00:00:00.000 Started by upstream project "autotest-per-patch" build number 122846 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.176 Using shallow fetch with depth 1 00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.176 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.365 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.377 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.391 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:05.391 > git config core.sparsecheckout # timeout=10 00:00:05.402 > git read-tree -mu HEAD # timeout=10 00:00:05.418 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:05.437 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:05.437 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.537 [Pipeline] Start of Pipeline 00:00:05.550 [Pipeline] library 00:00:05.551 Loading library shm_lib@master 00:00:05.552 Library shm_lib@master is cached. Copying from home. 00:00:05.605 [Pipeline] node 00:00:20.607 Still waiting to schedule task 00:00:20.607 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:42.712 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:42.714 [Pipeline] { 00:07:42.727 [Pipeline] catchError 00:07:42.729 [Pipeline] { 00:07:42.746 [Pipeline] wrap 00:07:42.755 [Pipeline] { 00:07:42.764 [Pipeline] stage 00:07:42.765 [Pipeline] { (Prologue) 00:07:42.788 [Pipeline] echo 00:07:42.789 Node: VM-host-SM9 00:07:42.796 [Pipeline] cleanWs 00:07:42.805 [WS-CLEANUP] Deleting project workspace... 00:07:42.805 [WS-CLEANUP] Deferred wipeout is used... 
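The checkout traced above can be reproduced by hand; a minimal sketch, using the Gerrit URL, refspec and revision exactly as they appear in the log, with a placeholder directory name ("jbp") and without the credential (GIT_ASKPASS) and proxy (proxy-dmz.intel.com:911) handling that Jenkins injects:

# Shallow fetch of the jenkins_build_pool repo at the revision checked out above.
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# 10da8f6d... is the FETCH_HEAD commit reported in the log.
git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100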
00:07:42.811 [WS-CLEANUP] done 00:07:42.963 [Pipeline] setCustomBuildProperty 00:07:43.038 [Pipeline] nodesByLabel 00:07:43.040 Found a total of 1 nodes with the 'sorcerer' label 00:07:43.050 [Pipeline] httpRequest 00:07:43.054 HttpMethod: GET 00:07:43.055 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:07:43.055 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:07:43.057 Response Code: HTTP/1.1 200 OK 00:07:43.058 Success: Status code 200 is in the accepted range: 200,404 00:07:43.058 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:07:43.337 [Pipeline] sh 00:07:43.626 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:07:43.646 [Pipeline] httpRequest 00:07:43.651 HttpMethod: GET 00:07:43.651 URL: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:07:43.652 Sending request to url: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:07:43.656 Response Code: HTTP/1.1 200 OK 00:07:43.657 Success: Status code 200 is in the accepted range: 200,404 00:07:43.658 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:07:46.115 [Pipeline] sh 00:07:46.394 + tar --no-same-owner -xf spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:07:49.752 [Pipeline] sh 00:07:50.054 + git -C spdk log --oneline -n5 00:07:50.054 2dc74a001 raid: free base bdev earlier during removal 00:07:50.054 6518a98df raid: remove base_bdev_lock 00:07:50.054 96aff3c95 raid: fix some issues in raid_bdev_write_config_json() 00:07:50.054 f9cccaa84 raid: examine other bdevs when starting from superblock 00:07:50.054 688de1b9f raid: factor out a function to get a raid bdev by uuid 00:07:50.075 [Pipeline] writeFile 00:07:50.094 [Pipeline] sh 00:07:50.376 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:50.388 [Pipeline] sh 00:07:50.667 + cat autorun-spdk.conf 00:07:50.667 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:50.667 SPDK_TEST_NVMF=1 00:07:50.667 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:50.667 SPDK_TEST_USDT=1 00:07:50.667 SPDK_TEST_NVMF_MDNS=1 00:07:50.667 SPDK_RUN_UBSAN=1 00:07:50.667 NET_TYPE=virt 00:07:50.667 SPDK_JSONRPC_GO_CLIENT=1 00:07:50.667 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:50.674 RUN_NIGHTLY=0 00:07:50.675 [Pipeline] } 00:07:50.692 [Pipeline] // stage 00:07:50.704 [Pipeline] stage 00:07:50.706 [Pipeline] { (Run VM) 00:07:50.721 [Pipeline] sh 00:07:51.001 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:51.001 + echo 'Start stage prepare_nvme.sh' 00:07:51.001 Start stage prepare_nvme.sh 00:07:51.001 + [[ -n 1 ]] 00:07:51.001 + disk_prefix=ex1 00:07:51.001 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:07:51.001 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:07:51.001 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:07:51.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:51.001 ++ SPDK_TEST_NVMF=1 00:07:51.001 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:51.001 ++ SPDK_TEST_USDT=1 00:07:51.001 ++ SPDK_TEST_NVMF_MDNS=1 00:07:51.001 ++ SPDK_RUN_UBSAN=1 00:07:51.001 ++ NET_TYPE=virt 00:07:51.001 ++ SPDK_JSONRPC_GO_CLIENT=1 00:07:51.001 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:51.001 ++ RUN_NIGHTLY=0 00:07:51.001 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:51.001 + 
nvme_files=() 00:07:51.001 + declare -A nvme_files 00:07:51.001 + backend_dir=/var/lib/libvirt/images/backends 00:07:51.001 + nvme_files['nvme.img']=5G 00:07:51.001 + nvme_files['nvme-cmb.img']=5G 00:07:51.001 + nvme_files['nvme-multi0.img']=4G 00:07:51.001 + nvme_files['nvme-multi1.img']=4G 00:07:51.001 + nvme_files['nvme-multi2.img']=4G 00:07:51.001 + nvme_files['nvme-openstack.img']=8G 00:07:51.001 + nvme_files['nvme-zns.img']=5G 00:07:51.001 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:51.001 + (( SPDK_TEST_FTL == 1 )) 00:07:51.001 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:51.001 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:07:51.001 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:51.001 + for nvme in "${!nvme_files[@]}" 00:07:51.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:07:51.936 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:51.936 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:07:51.936 + echo 'End stage prepare_nvme.sh' 00:07:51.936 End stage prepare_nvme.sh 00:07:51.949 [Pipeline] sh 00:07:52.229 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:52.229 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:07:52.229 00:07:52.229 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:07:52.229 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:07:52.229 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:52.229 HELP=0 00:07:52.229 DRY_RUN=0 00:07:52.229 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:07:52.229 NVME_DISKS_TYPE=nvme,nvme, 00:07:52.229 NVME_AUTO_CREATE=0 00:07:52.229 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:07:52.229 NVME_CMB=,, 00:07:52.229 NVME_PMR=,, 00:07:52.229 NVME_ZNS=,, 00:07:52.229 NVME_MS=,, 00:07:52.229 NVME_FDP=,, 00:07:52.229 SPDK_VAGRANT_DISTRO=fedora38 00:07:52.229 SPDK_VAGRANT_VMCPU=10 00:07:52.229 SPDK_VAGRANT_VMRAM=12288 00:07:52.229 SPDK_VAGRANT_PROVIDER=libvirt 00:07:52.229 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:52.229 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:52.229 SPDK_OPENSTACK_NETWORK=0 00:07:52.229 VAGRANT_PACKAGE_BOX=0 00:07:52.229 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:07:52.229 FORCE_DISTRO=true 00:07:52.229 VAGRANT_BOX_VERSION= 00:07:52.229 EXTRA_VAGRANTFILES= 00:07:52.229 NIC_MODEL=e1000 00:07:52.229 00:07:52.229 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:07:52.229 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:55.514 Bringing machine 'default' up with 'libvirt' provider... 00:07:56.082 ==> default: Creating image (snapshot of base box volume). 00:07:56.341 ==> default: Creating domain with the following settings... 00:07:56.341 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715738923_b4543be80fbc77504877 00:07:56.341 ==> default: -- Domain type: kvm 00:07:56.341 ==> default: -- Cpus: 10 00:07:56.342 ==> default: -- Feature: acpi 00:07:56.342 ==> default: -- Feature: apic 00:07:56.342 ==> default: -- Feature: pae 00:07:56.342 ==> default: -- Memory: 12288M 00:07:56.342 ==> default: -- Memory Backing: hugepages: 00:07:56.342 ==> default: -- Management MAC: 00:07:56.342 ==> default: -- Loader: 00:07:56.342 ==> default: -- Nvram: 00:07:56.342 ==> default: -- Base box: spdk/fedora38 00:07:56.342 ==> default: -- Storage pool: default 00:07:56.342 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715738923_b4543be80fbc77504877.img (20G) 00:07:56.342 ==> default: -- Volume Cache: default 00:07:56.342 ==> default: -- Kernel: 00:07:56.342 ==> default: -- Initrd: 00:07:56.342 ==> default: -- Graphics Type: vnc 00:07:56.342 ==> default: -- Graphics Port: -1 00:07:56.342 ==> default: -- Graphics IP: 127.0.0.1 00:07:56.342 ==> default: -- Graphics Password: Not defined 00:07:56.342 ==> default: -- Video Type: cirrus 00:07:56.342 ==> default: -- Video VRAM: 9216 00:07:56.342 ==> default: -- Sound Type: 00:07:56.342 ==> default: -- Keymap: en-us 00:07:56.342 ==> default: -- TPM Path: 00:07:56.342 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:56.342 ==> default: -- Command line args: 00:07:56.342 ==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:56.342 ==> default: -> value=-drive, 00:07:56.342 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:07:56.342 ==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:56.342 
==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:56.342 ==> default: -> value=-drive, 00:07:56.342 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:56.342 ==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:56.342 ==> default: -> value=-drive, 00:07:56.342 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:56.342 ==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:56.342 ==> default: -> value=-drive, 00:07:56.342 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:56.342 ==> default: -> value=-device, 00:07:56.342 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:56.342 ==> default: Creating shared folders metadata... 00:07:56.342 ==> default: Starting domain. 00:07:57.719 ==> default: Waiting for domain to get an IP address... 00:08:19.639 ==> default: Waiting for SSH to become available... 00:08:19.639 ==> default: Configuring and enabling network interfaces... 00:08:21.014 default: SSH address: 192.168.121.202:22 00:08:21.014 default: SSH username: vagrant 00:08:21.014 default: SSH auth method: private key 00:08:23.546 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:31.664 ==> default: Mounting SSHFS shared folder... 00:08:32.229 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:08:32.229 ==> default: Checking Mount.. 00:08:33.599 ==> default: Folder Successfully Mounted! 00:08:33.599 ==> default: Running provisioner: file... 00:08:34.236 default: ~/.gitconfig => .gitconfig 00:08:34.495 00:08:34.495 SUCCESS! 00:08:34.495 00:08:34.495 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:08:34.495 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:34.495 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
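Joined together, the "-> value=" pairs printed above are the NVMe slice of the QEMU command line that vagrant-libvirt builds for this domain. Roughly, and hedged: the real invocation also carries the machine, memory, boot-disk and network options that the log does not echo, and uses the SPDK_QEMU_EMULATOR path configured earlier:

# NVMe controllers and namespaces as passed to QEMU, reassembled from the log.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096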
00:08:34.495 00:08:34.760 [Pipeline] } 00:08:34.777 [Pipeline] // stage 00:08:34.785 [Pipeline] dir 00:08:34.785 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:08:34.786 [Pipeline] { 00:08:34.799 [Pipeline] catchError 00:08:34.801 [Pipeline] { 00:08:34.813 [Pipeline] sh 00:08:35.090 + + vagrant ssh-config --host vagrant 00:08:35.090 sed -ne /^Host/,$p 00:08:35.090 + tee ssh_conf 00:08:39.285 Host vagrant 00:08:39.285 HostName 192.168.121.202 00:08:39.285 User vagrant 00:08:39.285 Port 22 00:08:39.285 UserKnownHostsFile /dev/null 00:08:39.285 StrictHostKeyChecking no 00:08:39.285 PasswordAuthentication no 00:08:39.285 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:08:39.285 IdentitiesOnly yes 00:08:39.285 LogLevel FATAL 00:08:39.285 ForwardAgent yes 00:08:39.285 ForwardX11 yes 00:08:39.285 00:08:39.297 [Pipeline] withEnv 00:08:39.299 [Pipeline] { 00:08:39.314 [Pipeline] sh 00:08:39.585 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:39.585 source /etc/os-release 00:08:39.585 [[ -e /image.version ]] && img=$(< /image.version) 00:08:39.585 # Minimal, systemd-like check. 00:08:39.585 if [[ -e /.dockerenv ]]; then 00:08:39.585 # Clear garbage from the node's name: 00:08:39.585 # agt-er_autotest_547-896 -> autotest_547-896 00:08:39.585 # $HOSTNAME is the actual container id 00:08:39.585 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:39.585 if mountpoint -q /etc/hostname; then 00:08:39.585 # We can assume this is a mount from a host where container is running, 00:08:39.585 # so fetch its hostname to easily identify the target swarm worker. 00:08:39.585 container="$(< /etc/hostname) ($agent)" 00:08:39.585 else 00:08:39.585 # Fallback 00:08:39.585 container=$agent 00:08:39.585 fi 00:08:39.585 fi 00:08:39.585 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:39.585 00:08:39.855 [Pipeline] } 00:08:39.877 [Pipeline] // withEnv 00:08:39.887 [Pipeline] setCustomBuildProperty 00:08:39.903 [Pipeline] stage 00:08:39.905 [Pipeline] { (Tests) 00:08:39.924 [Pipeline] sh 00:08:40.200 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:40.473 [Pipeline] timeout 00:08:40.473 Timeout set to expire in 40 min 00:08:40.475 [Pipeline] { 00:08:40.493 [Pipeline] sh 00:08:40.771 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:41.337 HEAD is now at 2dc74a001 raid: free base bdev earlier during removal 00:08:41.351 [Pipeline] sh 00:08:41.634 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:41.908 [Pipeline] sh 00:08:42.188 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:42.460 [Pipeline] sh 00:08:42.739 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:08:42.739 ++ readlink -f spdk_repo 00:08:42.739 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:42.739 + [[ -n /home/vagrant/spdk_repo ]] 00:08:42.739 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:42.739 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:42.739 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:42.739 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:42.739 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:42.739 + cd /home/vagrant/spdk_repo 00:08:42.997 + source /etc/os-release 00:08:42.997 ++ NAME='Fedora Linux' 00:08:42.997 ++ VERSION='38 (Cloud Edition)' 00:08:42.997 ++ ID=fedora 00:08:42.997 ++ VERSION_ID=38 00:08:42.997 ++ VERSION_CODENAME= 00:08:42.997 ++ PLATFORM_ID=platform:f38 00:08:42.997 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:42.997 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:42.997 ++ LOGO=fedora-logo-icon 00:08:42.997 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:42.997 ++ HOME_URL=https://fedoraproject.org/ 00:08:42.997 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:42.997 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:42.997 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:42.997 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:42.997 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:42.997 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:42.997 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:42.997 ++ SUPPORT_END=2024-05-14 00:08:42.997 ++ VARIANT='Cloud Edition' 00:08:42.997 ++ VARIANT_ID=cloud 00:08:42.997 + uname -a 00:08:42.997 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:42.997 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:43.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.256 Hugepages 00:08:43.256 node hugesize free / total 00:08:43.256 node0 1048576kB 0 / 0 00:08:43.256 node0 2048kB 0 / 0 00:08:43.256 00:08:43.256 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:43.256 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:43.256 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:43.515 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:43.515 + rm -f /tmp/spdk-ld-path 00:08:43.515 + source autorun-spdk.conf 00:08:43.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:43.515 ++ SPDK_TEST_NVMF=1 00:08:43.515 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:43.515 ++ SPDK_TEST_USDT=1 00:08:43.515 ++ SPDK_TEST_NVMF_MDNS=1 00:08:43.515 ++ SPDK_RUN_UBSAN=1 00:08:43.515 ++ NET_TYPE=virt 00:08:43.515 ++ SPDK_JSONRPC_GO_CLIENT=1 00:08:43.515 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:43.515 ++ RUN_NIGHTLY=0 00:08:43.515 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:43.515 + [[ -n '' ]] 00:08:43.515 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:43.515 + for M in /var/spdk/build-*-manifest.txt 00:08:43.515 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:43.515 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:43.515 + for M in /var/spdk/build-*-manifest.txt 00:08:43.515 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:43.515 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:43.515 ++ uname 00:08:43.515 + [[ Linux == \L\i\n\u\x ]] 00:08:43.515 + sudo dmesg -T 00:08:43.515 + sudo dmesg --clear 00:08:43.515 + dmesg_pid=5146 00:08:43.515 + [[ Fedora Linux == FreeBSD ]] 00:08:43.515 + sudo dmesg -Tw 00:08:43.515 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.515 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.515 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:43.515 + [[ -x /usr/src/fio-static/fio ]] 00:08:43.515 + export FIO_BIN=/usr/src/fio-static/fio 00:08:43.515 + 
FIO_BIN=/usr/src/fio-static/fio 00:08:43.515 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:43.515 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:43.515 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:43.515 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.515 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.515 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:43.515 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.515 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.515 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:43.515 Test configuration: 00:08:43.515 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:43.515 SPDK_TEST_NVMF=1 00:08:43.515 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:43.515 SPDK_TEST_USDT=1 00:08:43.515 SPDK_TEST_NVMF_MDNS=1 00:08:43.515 SPDK_RUN_UBSAN=1 00:08:43.515 NET_TYPE=virt 00:08:43.515 SPDK_JSONRPC_GO_CLIENT=1 00:08:43.515 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:43.515 RUN_NIGHTLY=0 02:09:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.515 02:09:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:43.515 02:09:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.515 02:09:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.515 02:09:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.515 02:09:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.515 02:09:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.515 02:09:31 -- paths/export.sh@5 -- $ export PATH 00:08:43.515 02:09:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.515 02:09:31 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:43.515 02:09:31 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:43.515 02:09:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715738971.XXXXXX 00:08:43.515 02:09:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715738971.AjZICz 00:08:43.515 02:09:31 -- 
common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:43.515 02:09:31 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:08:43.515 02:09:31 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:43.515 02:09:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:43.515 02:09:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:43.515 02:09:31 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:43.515 02:09:31 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:08:43.515 02:09:31 -- common/autotest_common.sh@10 -- $ set +x 00:08:43.773 02:09:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:08:43.773 02:09:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:43.773 02:09:31 -- pm/common@17 -- $ local monitor 00:08:43.773 02:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.773 02:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.773 02:09:31 -- pm/common@25 -- $ sleep 1 00:08:43.773 02:09:31 -- pm/common@21 -- $ date +%s 00:08:43.773 02:09:31 -- pm/common@21 -- $ date +%s 00:08:43.773 02:09:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715738971 00:08:43.773 02:09:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715738971 00:08:43.774 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715738971_collect-cpu-load.pm.log 00:08:43.774 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715738971_collect-vmstat.pm.log 00:08:44.709 02:09:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:44.709 02:09:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:44.709 02:09:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:44.709 02:09:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:44.709 02:09:32 -- spdk/autobuild.sh@16 -- $ date -u 00:08:44.709 Wed May 15 02:09:32 AM UTC 2024 00:08:44.709 02:09:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:44.709 v24.05-pre-653-g2dc74a001 00:08:44.709 02:09:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:44.709 02:09:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:44.709 02:09:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:44.709 02:09:32 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:08:44.709 02:09:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:08:44.709 02:09:32 -- common/autotest_common.sh@10 -- $ set +x 00:08:44.709 ************************************ 00:08:44.709 START TEST ubsan 00:08:44.709 ************************************ 00:08:44.709 using ubsan 00:08:44.709 02:09:32 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:08:44.709 00:08:44.709 real 0m0.000s 00:08:44.709 user 0m0.000s 00:08:44.709 sys 0m0.000s 
00:08:44.709 02:09:32 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:08:44.709 ************************************ 00:08:44.709 END TEST ubsan 00:08:44.709 02:09:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:08:44.709 ************************************ 00:08:44.709 02:09:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:44.709 02:09:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:44.709 02:09:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:44.709 02:09:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:08:44.709 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:44.709 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:45.276 Using 'verbs' RDMA provider 00:08:58.044 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:10.254 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:10.254 go version go1.21.1 linux/amd64 00:09:10.821 Creating mk/config.mk...done. 00:09:10.821 Creating mk/cc.flags.mk...done. 00:09:10.821 Type 'make' to build. 00:09:10.821 02:09:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:10.821 02:09:58 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:09:10.821 02:09:58 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:09:10.821 02:09:58 -- common/autotest_common.sh@10 -- $ set +x 00:09:10.821 ************************************ 00:09:10.821 START TEST make 00:09:10.822 ************************************ 00:09:10.822 02:09:58 make -- common/autotest_common.sh@1121 -- $ make -j10 00:09:11.082 make[1]: Nothing to be done for 'all'. 
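The ./configure invocation above (its flags come from get_config_params, driven by the autorun-spdk.conf shown earlier) and the make that follows can be replayed outside the CI wrappers; a minimal sketch, assuming the same checkout location and that fio sources are present at /usr/src/fio as on the CI image:

# Rebuild SPDK with the options the job used; flags copied verbatim from the log.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
make -j10   # same parallelism as "run_test make make -j10" above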
00:09:25.950 The Meson build system 00:09:25.950 Version: 1.3.1 00:09:25.950 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:09:25.950 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:25.950 Build type: native build 00:09:25.950 Program cat found: YES (/usr/bin/cat) 00:09:25.950 Project name: DPDK 00:09:25.950 Project version: 23.11.0 00:09:25.950 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:25.950 C linker for the host machine: cc ld.bfd 2.39-16 00:09:25.950 Host machine cpu family: x86_64 00:09:25.950 Host machine cpu: x86_64 00:09:25.950 Message: ## Building in Developer Mode ## 00:09:25.950 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:25.950 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:09:25.950 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:25.950 Program python3 found: YES (/usr/bin/python3) 00:09:25.950 Program cat found: YES (/usr/bin/cat) 00:09:25.950 Compiler for C supports arguments -march=native: YES 00:09:25.950 Checking for size of "void *" : 8 00:09:25.950 Checking for size of "void *" : 8 (cached) 00:09:25.950 Library m found: YES 00:09:25.950 Library numa found: YES 00:09:25.950 Has header "numaif.h" : YES 00:09:25.950 Library fdt found: NO 00:09:25.950 Library execinfo found: NO 00:09:25.950 Has header "execinfo.h" : YES 00:09:25.950 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:25.950 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:25.950 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:25.950 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:25.950 Run-time dependency openssl found: YES 3.0.9 00:09:25.950 Run-time dependency libpcap found: YES 1.10.4 00:09:25.950 Has header "pcap.h" with dependency libpcap: YES 00:09:25.950 Compiler for C supports arguments -Wcast-qual: YES 00:09:25.950 Compiler for C supports arguments -Wdeprecated: YES 00:09:25.950 Compiler for C supports arguments -Wformat: YES 00:09:25.950 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:25.950 Compiler for C supports arguments -Wformat-security: NO 00:09:25.950 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:25.950 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:25.950 Compiler for C supports arguments -Wnested-externs: YES 00:09:25.950 Compiler for C supports arguments -Wold-style-definition: YES 00:09:25.950 Compiler for C supports arguments -Wpointer-arith: YES 00:09:25.950 Compiler for C supports arguments -Wsign-compare: YES 00:09:25.950 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:25.950 Compiler for C supports arguments -Wundef: YES 00:09:25.950 Compiler for C supports arguments -Wwrite-strings: YES 00:09:25.950 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:25.950 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:25.950 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:25.950 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:25.950 Program objdump found: YES (/usr/bin/objdump) 00:09:25.950 Compiler for C supports arguments -mavx512f: YES 00:09:25.950 Checking if "AVX512 checking" compiles: YES 00:09:25.950 Fetching value of define "__SSE4_2__" : 1 00:09:25.950 Fetching value of define "__AES__" : 1 00:09:25.950 Fetching value of define "__AVX__" : 1 00:09:25.950 
Fetching value of define "__AVX2__" : 1 00:09:25.950 Fetching value of define "__AVX512BW__" : (undefined) 00:09:25.950 Fetching value of define "__AVX512CD__" : (undefined) 00:09:25.950 Fetching value of define "__AVX512DQ__" : (undefined) 00:09:25.950 Fetching value of define "__AVX512F__" : (undefined) 00:09:25.950 Fetching value of define "__AVX512VL__" : (undefined) 00:09:25.950 Fetching value of define "__PCLMUL__" : 1 00:09:25.950 Fetching value of define "__RDRND__" : 1 00:09:25.950 Fetching value of define "__RDSEED__" : 1 00:09:25.950 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:25.950 Fetching value of define "__znver1__" : (undefined) 00:09:25.950 Fetching value of define "__znver2__" : (undefined) 00:09:25.950 Fetching value of define "__znver3__" : (undefined) 00:09:25.950 Fetching value of define "__znver4__" : (undefined) 00:09:25.950 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:25.950 Message: lib/log: Defining dependency "log" 00:09:25.950 Message: lib/kvargs: Defining dependency "kvargs" 00:09:25.950 Message: lib/telemetry: Defining dependency "telemetry" 00:09:25.950 Checking for function "getentropy" : NO 00:09:25.950 Message: lib/eal: Defining dependency "eal" 00:09:25.950 Message: lib/ring: Defining dependency "ring" 00:09:25.950 Message: lib/rcu: Defining dependency "rcu" 00:09:25.950 Message: lib/mempool: Defining dependency "mempool" 00:09:25.950 Message: lib/mbuf: Defining dependency "mbuf" 00:09:25.950 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:25.950 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:25.950 Compiler for C supports arguments -mpclmul: YES 00:09:25.950 Compiler for C supports arguments -maes: YES 00:09:25.950 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:25.950 Compiler for C supports arguments -mavx512bw: YES 00:09:25.950 Compiler for C supports arguments -mavx512dq: YES 00:09:25.950 Compiler for C supports arguments -mavx512vl: YES 00:09:25.950 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:25.950 Compiler for C supports arguments -mavx2: YES 00:09:25.950 Compiler for C supports arguments -mavx: YES 00:09:25.950 Message: lib/net: Defining dependency "net" 00:09:25.950 Message: lib/meter: Defining dependency "meter" 00:09:25.950 Message: lib/ethdev: Defining dependency "ethdev" 00:09:25.950 Message: lib/pci: Defining dependency "pci" 00:09:25.950 Message: lib/cmdline: Defining dependency "cmdline" 00:09:25.950 Message: lib/hash: Defining dependency "hash" 00:09:25.950 Message: lib/timer: Defining dependency "timer" 00:09:25.950 Message: lib/compressdev: Defining dependency "compressdev" 00:09:25.950 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:25.950 Message: lib/dmadev: Defining dependency "dmadev" 00:09:25.950 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:25.950 Message: lib/power: Defining dependency "power" 00:09:25.950 Message: lib/reorder: Defining dependency "reorder" 00:09:25.950 Message: lib/security: Defining dependency "security" 00:09:25.950 Has header "linux/userfaultfd.h" : YES 00:09:25.950 Has header "linux/vduse.h" : YES 00:09:25.950 Message: lib/vhost: Defining dependency "vhost" 00:09:25.950 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:25.950 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:25.950 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:25.950 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:25.950 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:25.950 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:25.950 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:25.950 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:25.950 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:25.950 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:25.950 Program doxygen found: YES (/usr/bin/doxygen) 00:09:25.950 Configuring doxy-api-html.conf using configuration 00:09:25.950 Configuring doxy-api-man.conf using configuration 00:09:25.950 Program mandb found: YES (/usr/bin/mandb) 00:09:25.950 Program sphinx-build found: NO 00:09:25.950 Configuring rte_build_config.h using configuration 00:09:25.950 Message: 00:09:25.950 ================= 00:09:25.950 Applications Enabled 00:09:25.950 ================= 00:09:25.950 00:09:25.950 apps: 00:09:25.950 00:09:25.950 00:09:25.950 Message: 00:09:25.950 ================= 00:09:25.950 Libraries Enabled 00:09:25.950 ================= 00:09:25.950 00:09:25.950 libs: 00:09:25.950 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:25.950 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:25.950 cryptodev, dmadev, power, reorder, security, vhost, 00:09:25.950 00:09:25.950 Message: 00:09:25.950 =============== 00:09:25.950 Drivers Enabled 00:09:25.950 =============== 00:09:25.950 00:09:25.950 common: 00:09:25.950 00:09:25.950 bus: 00:09:25.950 pci, vdev, 00:09:25.950 mempool: 00:09:25.950 ring, 00:09:25.950 dma: 00:09:25.950 00:09:25.950 net: 00:09:25.950 00:09:25.950 crypto: 00:09:25.950 00:09:25.950 compress: 00:09:25.950 00:09:25.950 vdpa: 00:09:25.950 00:09:25.950 00:09:25.950 Message: 00:09:25.950 ================= 00:09:25.950 Content Skipped 00:09:25.950 ================= 00:09:25.950 00:09:25.950 apps: 00:09:25.950 dumpcap: explicitly disabled via build config 00:09:25.950 graph: explicitly disabled via build config 00:09:25.950 pdump: explicitly disabled via build config 00:09:25.950 proc-info: explicitly disabled via build config 00:09:25.950 test-acl: explicitly disabled via build config 00:09:25.950 test-bbdev: explicitly disabled via build config 00:09:25.950 test-cmdline: explicitly disabled via build config 00:09:25.950 test-compress-perf: explicitly disabled via build config 00:09:25.950 test-crypto-perf: explicitly disabled via build config 00:09:25.950 test-dma-perf: explicitly disabled via build config 00:09:25.950 test-eventdev: explicitly disabled via build config 00:09:25.950 test-fib: explicitly disabled via build config 00:09:25.950 test-flow-perf: explicitly disabled via build config 00:09:25.950 test-gpudev: explicitly disabled via build config 00:09:25.950 test-mldev: explicitly disabled via build config 00:09:25.950 test-pipeline: explicitly disabled via build config 00:09:25.950 test-pmd: explicitly disabled via build config 00:09:25.950 test-regex: explicitly disabled via build config 00:09:25.950 test-sad: explicitly disabled via build config 00:09:25.950 test-security-perf: explicitly disabled via build config 00:09:25.950 00:09:25.950 libs: 00:09:25.951 metrics: explicitly disabled via build config 00:09:25.951 acl: explicitly disabled via build config 00:09:25.951 bbdev: explicitly disabled via build config 00:09:25.951 bitratestats: explicitly disabled via build config 00:09:25.951 bpf: explicitly disabled via build config 00:09:25.951 cfgfile: explicitly 
disabled via build config 00:09:25.951 distributor: explicitly disabled via build config 00:09:25.951 efd: explicitly disabled via build config 00:09:25.951 eventdev: explicitly disabled via build config 00:09:25.951 dispatcher: explicitly disabled via build config 00:09:25.951 gpudev: explicitly disabled via build config 00:09:25.951 gro: explicitly disabled via build config 00:09:25.951 gso: explicitly disabled via build config 00:09:25.951 ip_frag: explicitly disabled via build config 00:09:25.951 jobstats: explicitly disabled via build config 00:09:25.951 latencystats: explicitly disabled via build config 00:09:25.951 lpm: explicitly disabled via build config 00:09:25.951 member: explicitly disabled via build config 00:09:25.951 pcapng: explicitly disabled via build config 00:09:25.951 rawdev: explicitly disabled via build config 00:09:25.951 regexdev: explicitly disabled via build config 00:09:25.951 mldev: explicitly disabled via build config 00:09:25.951 rib: explicitly disabled via build config 00:09:25.951 sched: explicitly disabled via build config 00:09:25.951 stack: explicitly disabled via build config 00:09:25.951 ipsec: explicitly disabled via build config 00:09:25.951 pdcp: explicitly disabled via build config 00:09:25.951 fib: explicitly disabled via build config 00:09:25.951 port: explicitly disabled via build config 00:09:25.951 pdump: explicitly disabled via build config 00:09:25.951 table: explicitly disabled via build config 00:09:25.951 pipeline: explicitly disabled via build config 00:09:25.951 graph: explicitly disabled via build config 00:09:25.951 node: explicitly disabled via build config 00:09:25.951 00:09:25.951 drivers: 00:09:25.951 common/cpt: not in enabled drivers build config 00:09:25.951 common/dpaax: not in enabled drivers build config 00:09:25.951 common/iavf: not in enabled drivers build config 00:09:25.951 common/idpf: not in enabled drivers build config 00:09:25.951 common/mvep: not in enabled drivers build config 00:09:25.951 common/octeontx: not in enabled drivers build config 00:09:25.951 bus/auxiliary: not in enabled drivers build config 00:09:25.951 bus/cdx: not in enabled drivers build config 00:09:25.951 bus/dpaa: not in enabled drivers build config 00:09:25.951 bus/fslmc: not in enabled drivers build config 00:09:25.951 bus/ifpga: not in enabled drivers build config 00:09:25.951 bus/platform: not in enabled drivers build config 00:09:25.951 bus/vmbus: not in enabled drivers build config 00:09:25.951 common/cnxk: not in enabled drivers build config 00:09:25.951 common/mlx5: not in enabled drivers build config 00:09:25.951 common/nfp: not in enabled drivers build config 00:09:25.951 common/qat: not in enabled drivers build config 00:09:25.951 common/sfc_efx: not in enabled drivers build config 00:09:25.951 mempool/bucket: not in enabled drivers build config 00:09:25.951 mempool/cnxk: not in enabled drivers build config 00:09:25.951 mempool/dpaa: not in enabled drivers build config 00:09:25.951 mempool/dpaa2: not in enabled drivers build config 00:09:25.951 mempool/octeontx: not in enabled drivers build config 00:09:25.951 mempool/stack: not in enabled drivers build config 00:09:25.951 dma/cnxk: not in enabled drivers build config 00:09:25.951 dma/dpaa: not in enabled drivers build config 00:09:25.951 dma/dpaa2: not in enabled drivers build config 00:09:25.951 dma/hisilicon: not in enabled drivers build config 00:09:25.951 dma/idxd: not in enabled drivers build config 00:09:25.951 dma/ioat: not in enabled drivers build config 00:09:25.951 
dma/skeleton: not in enabled drivers build config 00:09:25.951 net/af_packet: not in enabled drivers build config 00:09:25.951 net/af_xdp: not in enabled drivers build config 00:09:25.951 net/ark: not in enabled drivers build config 00:09:25.951 net/atlantic: not in enabled drivers build config 00:09:25.951 net/avp: not in enabled drivers build config 00:09:25.951 net/axgbe: not in enabled drivers build config 00:09:25.951 net/bnx2x: not in enabled drivers build config 00:09:25.951 net/bnxt: not in enabled drivers build config 00:09:25.951 net/bonding: not in enabled drivers build config 00:09:25.951 net/cnxk: not in enabled drivers build config 00:09:25.951 net/cpfl: not in enabled drivers build config 00:09:25.951 net/cxgbe: not in enabled drivers build config 00:09:25.951 net/dpaa: not in enabled drivers build config 00:09:25.951 net/dpaa2: not in enabled drivers build config 00:09:25.951 net/e1000: not in enabled drivers build config 00:09:25.951 net/ena: not in enabled drivers build config 00:09:25.951 net/enetc: not in enabled drivers build config 00:09:25.951 net/enetfec: not in enabled drivers build config 00:09:25.951 net/enic: not in enabled drivers build config 00:09:25.951 net/failsafe: not in enabled drivers build config 00:09:25.951 net/fm10k: not in enabled drivers build config 00:09:25.951 net/gve: not in enabled drivers build config 00:09:25.951 net/hinic: not in enabled drivers build config 00:09:25.951 net/hns3: not in enabled drivers build config 00:09:25.951 net/i40e: not in enabled drivers build config 00:09:25.951 net/iavf: not in enabled drivers build config 00:09:25.951 net/ice: not in enabled drivers build config 00:09:25.951 net/idpf: not in enabled drivers build config 00:09:25.951 net/igc: not in enabled drivers build config 00:09:25.951 net/ionic: not in enabled drivers build config 00:09:25.951 net/ipn3ke: not in enabled drivers build config 00:09:25.951 net/ixgbe: not in enabled drivers build config 00:09:25.951 net/mana: not in enabled drivers build config 00:09:25.951 net/memif: not in enabled drivers build config 00:09:25.951 net/mlx4: not in enabled drivers build config 00:09:25.951 net/mlx5: not in enabled drivers build config 00:09:25.951 net/mvneta: not in enabled drivers build config 00:09:25.951 net/mvpp2: not in enabled drivers build config 00:09:25.951 net/netvsc: not in enabled drivers build config 00:09:25.951 net/nfb: not in enabled drivers build config 00:09:25.951 net/nfp: not in enabled drivers build config 00:09:25.951 net/ngbe: not in enabled drivers build config 00:09:25.951 net/null: not in enabled drivers build config 00:09:25.951 net/octeontx: not in enabled drivers build config 00:09:25.951 net/octeon_ep: not in enabled drivers build config 00:09:25.951 net/pcap: not in enabled drivers build config 00:09:25.951 net/pfe: not in enabled drivers build config 00:09:25.951 net/qede: not in enabled drivers build config 00:09:25.951 net/ring: not in enabled drivers build config 00:09:25.951 net/sfc: not in enabled drivers build config 00:09:25.951 net/softnic: not in enabled drivers build config 00:09:25.951 net/tap: not in enabled drivers build config 00:09:25.951 net/thunderx: not in enabled drivers build config 00:09:25.951 net/txgbe: not in enabled drivers build config 00:09:25.951 net/vdev_netvsc: not in enabled drivers build config 00:09:25.951 net/vhost: not in enabled drivers build config 00:09:25.951 net/virtio: not in enabled drivers build config 00:09:25.951 net/vmxnet3: not in enabled drivers build config 00:09:25.951 raw/*: 
missing internal dependency, "rawdev" 00:09:25.951 crypto/armv8: not in enabled drivers build config 00:09:25.951 crypto/bcmfs: not in enabled drivers build config 00:09:25.951 crypto/caam_jr: not in enabled drivers build config 00:09:25.951 crypto/ccp: not in enabled drivers build config 00:09:25.951 crypto/cnxk: not in enabled drivers build config 00:09:25.951 crypto/dpaa_sec: not in enabled drivers build config 00:09:25.951 crypto/dpaa2_sec: not in enabled drivers build config 00:09:25.951 crypto/ipsec_mb: not in enabled drivers build config 00:09:25.951 crypto/mlx5: not in enabled drivers build config 00:09:25.951 crypto/mvsam: not in enabled drivers build config 00:09:25.951 crypto/nitrox: not in enabled drivers build config 00:09:25.951 crypto/null: not in enabled drivers build config 00:09:25.951 crypto/octeontx: not in enabled drivers build config 00:09:25.951 crypto/openssl: not in enabled drivers build config 00:09:25.951 crypto/scheduler: not in enabled drivers build config 00:09:25.951 crypto/uadk: not in enabled drivers build config 00:09:25.951 crypto/virtio: not in enabled drivers build config 00:09:25.951 compress/isal: not in enabled drivers build config 00:09:25.951 compress/mlx5: not in enabled drivers build config 00:09:25.951 compress/octeontx: not in enabled drivers build config 00:09:25.951 compress/zlib: not in enabled drivers build config 00:09:25.951 regex/*: missing internal dependency, "regexdev" 00:09:25.951 ml/*: missing internal dependency, "mldev" 00:09:25.951 vdpa/ifc: not in enabled drivers build config 00:09:25.951 vdpa/mlx5: not in enabled drivers build config 00:09:25.951 vdpa/nfp: not in enabled drivers build config 00:09:25.951 vdpa/sfc: not in enabled drivers build config 00:09:25.951 event/*: missing internal dependency, "eventdev" 00:09:25.951 baseband/*: missing internal dependency, "bbdev" 00:09:25.951 gpu/*: missing internal dependency, "gpudev" 00:09:25.951 00:09:25.951 00:09:25.951 Build targets in project: 85 00:09:25.951 00:09:25.951 DPDK 23.11.0 00:09:25.951 00:09:25.951 User defined options 00:09:25.951 buildtype : debug 00:09:25.951 default_library : shared 00:09:25.951 libdir : lib 00:09:25.951 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:25.951 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:25.951 c_link_args : 00:09:25.951 cpu_instruction_set: native 00:09:25.951 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:25.951 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:25.951 enable_docs : false 00:09:25.951 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:25.951 enable_kmods : false 00:09:25.951 tests : false 00:09:25.951 00:09:25.951 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:25.951 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:25.951 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:25.951 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:25.951 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:25.951 [4/265] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:25.951 [5/265] Linking static target lib/librte_log.a 00:09:25.951 [6/265] Linking static target lib/librte_kvargs.a 00:09:25.951 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:25.951 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:25.951 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:25.952 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:25.952 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:25.952 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:25.952 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:25.952 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:25.952 [15/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:25.952 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:25.952 [17/265] Linking target lib/librte_log.so.24.0 00:09:25.952 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:25.952 [19/265] Linking static target lib/librte_telemetry.a 00:09:25.952 [20/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:09:25.952 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:25.952 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:25.952 [23/265] Linking target lib/librte_kvargs.so.24.0 00:09:25.952 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:25.952 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:25.952 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:25.952 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:09:26.210 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:26.210 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:26.210 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:26.468 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:26.468 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:26.468 [33/265] Linking target lib/librte_telemetry.so.24.0 00:09:26.468 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:26.727 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:09:26.985 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:26.985 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:26.985 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:26.985 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:26.985 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:26.985 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:26.985 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:27.243 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:27.243 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:27.243 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:27.243 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:27.502 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:27.760 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:27.760 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:27.760 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:27.760 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:28.017 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:28.017 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:28.017 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:28.276 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:28.276 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:28.276 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:28.276 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:28.276 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:28.276 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:28.534 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:28.534 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:28.534 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:28.792 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:28.792 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:29.071 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:29.071 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:29.071 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:29.071 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:29.362 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:29.362 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:29.362 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:29.362 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:29.362 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:29.362 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:29.362 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:29.362 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:29.620 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:29.878 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:29.878 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:30.137 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:30.398 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:30.398 [83/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:30.398 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:30.398 [85/265] Linking static target lib/librte_ring.a 00:09:30.398 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:30.658 [87/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:30.658 [88/265] Linking static target lib/librte_eal.a 00:09:30.658 [89/265] Linking static target lib/librte_rcu.a 00:09:30.658 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:30.658 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:30.658 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:30.920 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:30.920 [94/265] Linking static target lib/librte_mempool.a 00:09:31.184 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:31.184 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:31.184 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:31.449 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:31.449 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:31.449 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:31.449 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:31.717 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:31.717 [103/265] Linking static target lib/librte_mbuf.a 00:09:31.717 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:31.985 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:31.985 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:32.247 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:32.247 [108/265] Linking static target lib/librte_meter.a 00:09:32.247 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:32.247 [110/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:32.247 [111/265] Linking static target lib/librte_net.a 00:09:32.247 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:32.813 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:32.813 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:32.813 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:32.813 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:33.070 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:33.070 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:33.327 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:33.586 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:33.844 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:33.844 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:34.102 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:34.360 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 
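The DPDK 23.11.0 configuration summarized above (buildtype debug, shared default_library, most apps and libs disabled, only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled) corresponds to a meson/ninja invocation roughly like the sketch below. This is an illustrative reconstruction only, assuming a standalone configure of the bundled DPDK tree; the actual command is generated by SPDK's dpdk build wrapper, and the option values are the ones printed in the "User defined options" summary.
# Hedged sketch, not the literal command run by this job.
DISABLE_APPS='dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test'
DISABLE_LIBS='acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table'
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
  -Dbuildtype=debug \
  -Ddefault_library=shared \
  -Dlibdir=lib \
  -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps="$DISABLE_APPS" \
  -Ddisable_libs="$DISABLE_LIBS" \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false
# meson then drives the 265 build targets through ninja, as in the log:
ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10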
00:09:34.360 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:34.360 [126/265] Linking static target lib/librte_pci.a 00:09:34.360 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:34.360 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:34.360 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:34.360 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:34.360 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:34.360 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:34.360 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:34.360 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:34.618 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:34.618 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:34.618 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:34.618 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:34.618 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:34.618 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:34.618 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:34.618 [142/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:34.618 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:34.876 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:34.876 [145/265] Linking static target lib/librte_ethdev.a 00:09:34.876 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:35.134 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:35.134 [148/265] Linking static target lib/librte_cmdline.a 00:09:35.393 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:35.393 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:35.393 [151/265] Linking static target lib/librte_timer.a 00:09:35.393 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:35.393 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:35.650 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:35.650 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:35.650 [156/265] Linking static target lib/librte_compressdev.a 00:09:35.907 [157/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:35.907 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:36.165 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:36.165 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:36.165 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:36.424 [162/265] Linking static target lib/librte_hash.a 00:09:36.424 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:36.424 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:36.683 [165/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:36.683 [166/265] Linking static target lib/librte_dmadev.a 00:09:36.683 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:36.683 [168/265] Linking static target lib/librte_cryptodev.a 00:09:36.683 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:36.683 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:36.941 [171/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:36.941 [172/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:36.941 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:36.941 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:37.199 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:37.457 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:37.457 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:37.457 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:37.457 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:37.457 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:37.738 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:37.738 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:37.997 [183/265] Linking static target lib/librte_power.a 00:09:38.256 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:38.256 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:38.256 [186/265] Linking static target lib/librte_reorder.a 00:09:38.256 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:38.256 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:38.256 [189/265] Linking static target lib/librte_security.a 00:09:38.256 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:38.822 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:38.822 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:39.078 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.078 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.336 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:39.594 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.594 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:39.852 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:39.852 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:39.852 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:39.852 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:39.852 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:40.109 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:40.366 [204/265] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:40.366 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:40.366 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:40.366 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:40.624 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:40.624 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:40.624 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:40.624 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:40.624 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:40.624 [213/265] Linking static target drivers/librte_bus_vdev.a 00:09:40.624 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:40.624 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:40.881 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:40.881 [217/265] Linking static target drivers/librte_bus_pci.a 00:09:40.881 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.881 [219/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:40.881 [220/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:41.138 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:41.396 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:41.396 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:41.396 [224/265] Linking static target drivers/librte_mempool_ring.a 00:09:41.396 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.653 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:41.653 [227/265] Linking static target lib/librte_vhost.a 00:09:42.586 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:42.844 [229/265] Linking target lib/librte_eal.so.24.0 00:09:42.844 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:42.844 [231/265] Linking target lib/librte_pci.so.24.0 00:09:42.844 [232/265] Linking target lib/librte_dmadev.so.24.0 00:09:42.844 [233/265] Linking target lib/librte_meter.so.24.0 00:09:42.844 [234/265] Linking target lib/librte_ring.so.24.0 00:09:43.101 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:09:43.101 [236/265] Linking target lib/librte_timer.so.24.0 00:09:43.101 [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.101 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:09:43.101 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:43.101 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:09:43.101 [241/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.101 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:43.101 [243/265] Generating 
symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:43.102 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:09:43.102 [245/265] Linking target lib/librte_mempool.so.24.0 00:09:43.102 [246/265] Linking target lib/librte_rcu.so.24.0 00:09:43.368 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:43.368 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:43.368 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:09:43.368 [250/265] Linking target lib/librte_mbuf.so.24.0 00:09:43.640 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:43.640 [252/265] Linking target lib/librte_reorder.so.24.0 00:09:43.640 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:09:43.640 [254/265] Linking target lib/librte_net.so.24.0 00:09:43.640 [255/265] Linking target lib/librte_compressdev.so.24.0 00:09:43.640 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:43.640 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:43.899 [258/265] Linking target lib/librte_hash.so.24.0 00:09:43.899 [259/265] Linking target lib/librte_security.so.24.0 00:09:43.899 [260/265] Linking target lib/librte_cmdline.so.24.0 00:09:43.899 [261/265] Linking target lib/librte_ethdev.so.24.0 00:09:43.899 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:09:43.899 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:09:44.158 [264/265] Linking target lib/librte_power.so.24.0 00:09:44.158 [265/265] Linking target lib/librte_vhost.so.24.0 00:09:44.158 INFO: autodetecting backend as ninja 00:09:44.158 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:45.538 CC lib/ut_mock/mock.o 00:09:45.538 CC lib/log/log_flags.o 00:09:45.538 CC lib/log/log.o 00:09:45.538 CC lib/log/log_deprecated.o 00:09:45.538 CC lib/ut/ut.o 00:09:45.538 LIB libspdk_ut_mock.a 00:09:45.538 SO libspdk_ut_mock.so.6.0 00:09:45.538 LIB libspdk_log.a 00:09:45.538 SYMLINK libspdk_ut_mock.so 00:09:45.539 SO libspdk_log.so.7.0 00:09:45.539 LIB libspdk_ut.a 00:09:45.539 SO libspdk_ut.so.2.0 00:09:45.539 SYMLINK libspdk_log.so 00:09:45.539 SYMLINK libspdk_ut.so 00:09:45.806 CC lib/dma/dma.o 00:09:45.806 CXX lib/trace_parser/trace.o 00:09:45.806 CC lib/util/base64.o 00:09:45.806 CC lib/util/cpuset.o 00:09:45.806 CC lib/util/bit_array.o 00:09:45.806 CC lib/ioat/ioat.o 00:09:45.806 CC lib/util/crc16.o 00:09:45.806 CC lib/util/crc32.o 00:09:45.806 CC lib/util/crc32c.o 00:09:46.065 CC lib/vfio_user/host/vfio_user_pci.o 00:09:46.065 CC lib/vfio_user/host/vfio_user.o 00:09:46.065 CC lib/util/crc32_ieee.o 00:09:46.065 CC lib/util/crc64.o 00:09:46.065 CC lib/util/dif.o 00:09:46.065 LIB libspdk_ioat.a 00:09:46.323 CC lib/util/fd.o 00:09:46.323 SO libspdk_ioat.so.7.0 00:09:46.323 CC lib/util/file.o 00:09:46.323 LIB libspdk_dma.a 00:09:46.323 SO libspdk_dma.so.4.0 00:09:46.323 CC lib/util/hexlify.o 00:09:46.323 SYMLINK libspdk_ioat.so 00:09:46.323 CC lib/util/iov.o 00:09:46.323 CC lib/util/math.o 00:09:46.323 CC lib/util/pipe.o 00:09:46.323 SYMLINK libspdk_dma.so 00:09:46.323 CC lib/util/strerror_tls.o 00:09:46.323 LIB libspdk_vfio_user.a 00:09:46.323 SO libspdk_vfio_user.so.5.0 00:09:46.323 CC lib/util/string.o 00:09:46.581 CC lib/util/uuid.o 00:09:46.581 CC lib/util/fd_group.o 
00:09:46.581 SYMLINK libspdk_vfio_user.so 00:09:46.581 CC lib/util/xor.o 00:09:46.581 CC lib/util/zipf.o 00:09:46.840 LIB libspdk_util.a 00:09:46.840 SO libspdk_util.so.9.0 00:09:47.099 LIB libspdk_trace_parser.a 00:09:47.099 SO libspdk_trace_parser.so.5.0 00:09:47.099 SYMLINK libspdk_util.so 00:09:47.099 SYMLINK libspdk_trace_parser.so 00:09:47.357 CC lib/conf/conf.o 00:09:47.357 CC lib/json/json_parse.o 00:09:47.357 CC lib/json/json_util.o 00:09:47.357 CC lib/json/json_write.o 00:09:47.357 CC lib/idxd/idxd_user.o 00:09:47.357 CC lib/idxd/idxd.o 00:09:47.357 CC lib/rdma/common.o 00:09:47.357 CC lib/rdma/rdma_verbs.o 00:09:47.357 CC lib/env_dpdk/env.o 00:09:47.357 CC lib/vmd/vmd.o 00:09:47.615 CC lib/vmd/led.o 00:09:47.615 CC lib/env_dpdk/memory.o 00:09:47.615 CC lib/env_dpdk/pci.o 00:09:47.615 LIB libspdk_conf.a 00:09:47.615 SO libspdk_conf.so.6.0 00:09:47.615 LIB libspdk_rdma.a 00:09:47.873 CC lib/env_dpdk/init.o 00:09:47.873 CC lib/env_dpdk/threads.o 00:09:47.873 SO libspdk_rdma.so.6.0 00:09:47.873 SYMLINK libspdk_conf.so 00:09:47.873 CC lib/env_dpdk/pci_ioat.o 00:09:47.873 LIB libspdk_json.a 00:09:47.873 SO libspdk_json.so.6.0 00:09:47.873 SYMLINK libspdk_rdma.so 00:09:47.873 CC lib/env_dpdk/pci_virtio.o 00:09:47.873 CC lib/env_dpdk/pci_vmd.o 00:09:47.873 SYMLINK libspdk_json.so 00:09:47.873 CC lib/env_dpdk/pci_idxd.o 00:09:48.130 CC lib/env_dpdk/pci_event.o 00:09:48.130 CC lib/env_dpdk/sigbus_handler.o 00:09:48.130 CC lib/env_dpdk/pci_dpdk.o 00:09:48.130 LIB libspdk_idxd.a 00:09:48.130 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:48.130 SO libspdk_idxd.so.12.0 00:09:48.130 CC lib/jsonrpc/jsonrpc_server.o 00:09:48.130 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:48.130 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:48.130 SYMLINK libspdk_idxd.so 00:09:48.130 CC lib/jsonrpc/jsonrpc_client.o 00:09:48.388 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:48.388 LIB libspdk_vmd.a 00:09:48.388 SO libspdk_vmd.so.6.0 00:09:48.647 LIB libspdk_jsonrpc.a 00:09:48.647 SYMLINK libspdk_vmd.so 00:09:48.647 SO libspdk_jsonrpc.so.6.0 00:09:48.647 SYMLINK libspdk_jsonrpc.so 00:09:48.905 CC lib/rpc/rpc.o 00:09:48.905 LIB libspdk_env_dpdk.a 00:09:48.905 SO libspdk_env_dpdk.so.14.0 00:09:49.164 LIB libspdk_rpc.a 00:09:49.164 SYMLINK libspdk_env_dpdk.so 00:09:49.164 SO libspdk_rpc.so.6.0 00:09:49.164 SYMLINK libspdk_rpc.so 00:09:49.422 CC lib/trace/trace.o 00:09:49.422 CC lib/trace/trace_rpc.o 00:09:49.422 CC lib/trace/trace_flags.o 00:09:49.422 CC lib/notify/notify.o 00:09:49.422 CC lib/notify/notify_rpc.o 00:09:49.422 CC lib/keyring/keyring.o 00:09:49.422 CC lib/keyring/keyring_rpc.o 00:09:49.680 LIB libspdk_notify.a 00:09:49.680 SO libspdk_notify.so.6.0 00:09:49.680 SYMLINK libspdk_notify.so 00:09:49.938 LIB libspdk_trace.a 00:09:49.938 LIB libspdk_keyring.a 00:09:49.938 SO libspdk_trace.so.10.0 00:09:49.938 SO libspdk_keyring.so.1.0 00:09:49.938 SYMLINK libspdk_keyring.so 00:09:49.938 SYMLINK libspdk_trace.so 00:09:50.196 CC lib/sock/sock.o 00:09:50.196 CC lib/thread/thread.o 00:09:50.196 CC lib/sock/sock_rpc.o 00:09:50.196 CC lib/thread/iobuf.o 00:09:50.762 LIB libspdk_sock.a 00:09:50.762 SO libspdk_sock.so.9.0 00:09:50.762 SYMLINK libspdk_sock.so 00:09:51.021 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:51.021 CC lib/nvme/nvme_ctrlr.o 00:09:51.021 CC lib/nvme/nvme_fabric.o 00:09:51.021 CC lib/nvme/nvme_ns.o 00:09:51.021 CC lib/nvme/nvme_ns_cmd.o 00:09:51.021 CC lib/nvme/nvme_pcie.o 00:09:51.021 CC lib/nvme/nvme_pcie_common.o 00:09:51.021 CC lib/nvme/nvme_qpair.o 00:09:51.021 CC lib/nvme/nvme.o 00:09:51.955 LIB libspdk_thread.a 
00:09:51.955 CC lib/nvme/nvme_quirks.o 00:09:51.955 SO libspdk_thread.so.10.0 00:09:51.955 SYMLINK libspdk_thread.so 00:09:51.955 CC lib/nvme/nvme_transport.o 00:09:51.955 CC lib/nvme/nvme_discovery.o 00:09:51.955 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:52.213 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:52.213 CC lib/nvme/nvme_tcp.o 00:09:52.213 CC lib/nvme/nvme_opal.o 00:09:52.214 CC lib/nvme/nvme_io_msg.o 00:09:52.472 CC lib/nvme/nvme_poll_group.o 00:09:52.472 CC lib/nvme/nvme_zns.o 00:09:52.730 CC lib/nvme/nvme_stubs.o 00:09:52.730 CC lib/nvme/nvme_auth.o 00:09:52.730 CC lib/nvme/nvme_cuse.o 00:09:52.730 CC lib/nvme/nvme_rdma.o 00:09:52.989 CC lib/accel/accel.o 00:09:53.248 CC lib/blob/blobstore.o 00:09:53.248 CC lib/accel/accel_rpc.o 00:09:53.248 CC lib/accel/accel_sw.o 00:09:53.507 CC lib/blob/request.o 00:09:53.507 CC lib/init/json_config.o 00:09:53.507 CC lib/virtio/virtio.o 00:09:53.507 CC lib/init/subsystem.o 00:09:53.766 CC lib/init/subsystem_rpc.o 00:09:53.766 CC lib/virtio/virtio_vhost_user.o 00:09:53.766 CC lib/init/rpc.o 00:09:53.766 CC lib/virtio/virtio_vfio_user.o 00:09:53.766 CC lib/blob/zeroes.o 00:09:53.766 CC lib/virtio/virtio_pci.o 00:09:54.024 CC lib/blob/blob_bs_dev.o 00:09:54.024 LIB libspdk_init.a 00:09:54.024 SO libspdk_init.so.5.0 00:09:54.282 SYMLINK libspdk_init.so 00:09:54.282 LIB libspdk_virtio.a 00:09:54.282 LIB libspdk_nvme.a 00:09:54.282 SO libspdk_virtio.so.7.0 00:09:54.282 CC lib/event/app.o 00:09:54.282 CC lib/event/reactor.o 00:09:54.282 CC lib/event/log_rpc.o 00:09:54.282 CC lib/event/app_rpc.o 00:09:54.282 CC lib/event/scheduler_static.o 00:09:54.540 SYMLINK libspdk_virtio.so 00:09:54.540 SO libspdk_nvme.so.13.0 00:09:54.830 LIB libspdk_accel.a 00:09:54.830 SO libspdk_accel.so.15.0 00:09:54.830 SYMLINK libspdk_accel.so 00:09:54.830 SYMLINK libspdk_nvme.so 00:09:55.088 LIB libspdk_event.a 00:09:55.088 SO libspdk_event.so.13.0 00:09:55.088 CC lib/bdev/bdev.o 00:09:55.088 CC lib/bdev/bdev_rpc.o 00:09:55.088 CC lib/bdev/bdev_zone.o 00:09:55.088 CC lib/bdev/part.o 00:09:55.088 CC lib/bdev/scsi_nvme.o 00:09:55.088 SYMLINK libspdk_event.so 00:09:56.989 LIB libspdk_blob.a 00:09:56.989 SO libspdk_blob.so.11.0 00:09:56.989 SYMLINK libspdk_blob.so 00:09:57.247 CC lib/lvol/lvol.o 00:09:57.247 CC lib/blobfs/blobfs.o 00:09:57.247 CC lib/blobfs/tree.o 00:09:57.814 LIB libspdk_bdev.a 00:09:58.073 SO libspdk_bdev.so.15.0 00:09:58.073 SYMLINK libspdk_bdev.so 00:09:58.073 LIB libspdk_blobfs.a 00:09:58.073 LIB libspdk_lvol.a 00:09:58.073 SO libspdk_blobfs.so.10.0 00:09:58.073 SO libspdk_lvol.so.10.0 00:09:58.331 SYMLINK libspdk_blobfs.so 00:09:58.331 SYMLINK libspdk_lvol.so 00:09:58.331 CC lib/scsi/dev.o 00:09:58.331 CC lib/scsi/lun.o 00:09:58.331 CC lib/scsi/port.o 00:09:58.331 CC lib/scsi/scsi_bdev.o 00:09:58.331 CC lib/scsi/scsi.o 00:09:58.331 CC lib/ublk/ublk.o 00:09:58.331 CC lib/scsi/scsi_pr.o 00:09:58.331 CC lib/ftl/ftl_core.o 00:09:58.331 CC lib/nvmf/ctrlr.o 00:09:58.331 CC lib/nbd/nbd.o 00:09:58.590 CC lib/nbd/nbd_rpc.o 00:09:58.590 CC lib/nvmf/ctrlr_discovery.o 00:09:58.590 CC lib/scsi/scsi_rpc.o 00:09:58.590 CC lib/scsi/task.o 00:09:58.590 CC lib/ftl/ftl_init.o 00:09:58.590 CC lib/ftl/ftl_layout.o 00:09:58.848 LIB libspdk_nbd.a 00:09:58.848 CC lib/ublk/ublk_rpc.o 00:09:58.848 SO libspdk_nbd.so.7.0 00:09:58.848 CC lib/nvmf/ctrlr_bdev.o 00:09:58.848 SYMLINK libspdk_nbd.so 00:09:58.848 CC lib/nvmf/subsystem.o 00:09:58.848 CC lib/nvmf/nvmf.o 00:09:58.848 CC lib/ftl/ftl_debug.o 00:09:58.848 CC lib/ftl/ftl_io.o 00:09:58.848 LIB libspdk_scsi.a 00:09:59.106 LIB 
libspdk_ublk.a 00:09:59.106 SO libspdk_ublk.so.3.0 00:09:59.106 CC lib/nvmf/nvmf_rpc.o 00:09:59.106 SO libspdk_scsi.so.9.0 00:09:59.106 CC lib/ftl/ftl_sb.o 00:09:59.106 SYMLINK libspdk_ublk.so 00:09:59.106 CC lib/ftl/ftl_l2p.o 00:09:59.106 SYMLINK libspdk_scsi.so 00:09:59.106 CC lib/nvmf/transport.o 00:09:59.106 CC lib/ftl/ftl_l2p_flat.o 00:09:59.364 CC lib/ftl/ftl_nv_cache.o 00:09:59.365 CC lib/ftl/ftl_band.o 00:09:59.365 CC lib/ftl/ftl_band_ops.o 00:09:59.365 CC lib/ftl/ftl_writer.o 00:09:59.624 CC lib/ftl/ftl_rq.o 00:09:59.624 CC lib/ftl/ftl_reloc.o 00:09:59.882 CC lib/ftl/ftl_l2p_cache.o 00:09:59.882 CC lib/ftl/ftl_p2l.o 00:09:59.882 CC lib/nvmf/tcp.o 00:09:59.882 CC lib/ftl/mngt/ftl_mngt.o 00:10:00.140 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:00.140 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:00.140 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:00.140 CC lib/nvmf/stubs.o 00:10:00.398 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:00.398 CC lib/nvmf/mdns_server.o 00:10:00.398 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:00.398 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:00.398 CC lib/nvmf/rdma.o 00:10:00.398 CC lib/nvmf/auth.o 00:10:00.657 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:00.657 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:00.657 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:00.657 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:00.955 CC lib/iscsi/conn.o 00:10:00.955 CC lib/iscsi/init_grp.o 00:10:00.955 CC lib/vhost/vhost.o 00:10:00.955 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:00.955 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:01.233 CC lib/iscsi/iscsi.o 00:10:01.233 CC lib/vhost/vhost_rpc.o 00:10:01.233 CC lib/vhost/vhost_scsi.o 00:10:01.233 CC lib/iscsi/md5.o 00:10:01.233 CC lib/ftl/utils/ftl_conf.o 00:10:01.490 CC lib/iscsi/param.o 00:10:01.490 CC lib/ftl/utils/ftl_md.o 00:10:01.490 CC lib/ftl/utils/ftl_mempool.o 00:10:01.783 CC lib/iscsi/portal_grp.o 00:10:01.783 CC lib/iscsi/tgt_node.o 00:10:01.783 CC lib/iscsi/iscsi_subsystem.o 00:10:02.039 CC lib/ftl/utils/ftl_bitmap.o 00:10:02.039 CC lib/iscsi/iscsi_rpc.o 00:10:02.039 CC lib/vhost/vhost_blk.o 00:10:02.039 CC lib/iscsi/task.o 00:10:02.039 CC lib/ftl/utils/ftl_property.o 00:10:02.295 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:02.295 CC lib/vhost/rte_vhost_user.o 00:10:02.295 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:02.551 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:02.551 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:02.551 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:02.551 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:02.551 LIB libspdk_iscsi.a 00:10:02.551 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:02.551 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:02.808 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:02.808 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:02.808 CC lib/ftl/base/ftl_base_dev.o 00:10:02.808 SO libspdk_iscsi.so.8.0 00:10:02.808 CC lib/ftl/base/ftl_base_bdev.o 00:10:02.808 LIB libspdk_nvmf.a 00:10:02.808 CC lib/ftl/ftl_trace.o 00:10:03.066 SO libspdk_nvmf.so.18.0 00:10:03.066 SYMLINK libspdk_iscsi.so 00:10:03.324 SYMLINK libspdk_nvmf.so 00:10:03.324 LIB libspdk_ftl.a 00:10:03.582 SO libspdk_ftl.so.9.0 00:10:03.841 LIB libspdk_vhost.a 00:10:03.841 SYMLINK libspdk_ftl.so 00:10:03.841 SO libspdk_vhost.so.8.0 00:10:04.098 SYMLINK libspdk_vhost.so 00:10:04.356 CC module/env_dpdk/env_dpdk_rpc.o 00:10:04.356 CC module/accel/error/accel_error.o 00:10:04.356 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:04.356 CC module/keyring/file/keyring.o 00:10:04.356 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:04.356 CC module/accel/dsa/accel_dsa.o 00:10:04.356 CC module/accel/iaa/accel_iaa.o 
00:10:04.615 CC module/blob/bdev/blob_bdev.o 00:10:04.615 CC module/accel/ioat/accel_ioat.o 00:10:04.615 CC module/sock/posix/posix.o 00:10:04.615 LIB libspdk_env_dpdk_rpc.a 00:10:04.615 SO libspdk_env_dpdk_rpc.so.6.0 00:10:04.615 LIB libspdk_scheduler_dpdk_governor.a 00:10:04.615 SYMLINK libspdk_env_dpdk_rpc.so 00:10:04.615 CC module/accel/error/accel_error_rpc.o 00:10:04.615 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:04.615 CC module/accel/ioat/accel_ioat_rpc.o 00:10:04.615 CC module/keyring/file/keyring_rpc.o 00:10:04.873 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:04.873 CC module/accel/dsa/accel_dsa_rpc.o 00:10:04.873 CC module/accel/iaa/accel_iaa_rpc.o 00:10:04.873 LIB libspdk_scheduler_dynamic.a 00:10:04.873 SO libspdk_scheduler_dynamic.so.4.0 00:10:04.873 SYMLINK libspdk_scheduler_dynamic.so 00:10:04.873 LIB libspdk_accel_error.a 00:10:04.873 LIB libspdk_keyring_file.a 00:10:04.873 LIB libspdk_accel_ioat.a 00:10:04.873 LIB libspdk_blob_bdev.a 00:10:04.873 SO libspdk_accel_error.so.2.0 00:10:04.873 LIB libspdk_accel_iaa.a 00:10:04.873 SO libspdk_keyring_file.so.1.0 00:10:04.873 LIB libspdk_accel_dsa.a 00:10:04.873 SO libspdk_blob_bdev.so.11.0 00:10:04.873 CC module/scheduler/gscheduler/gscheduler.o 00:10:04.873 SO libspdk_accel_ioat.so.6.0 00:10:04.873 SO libspdk_accel_iaa.so.3.0 00:10:05.130 SYMLINK libspdk_accel_error.so 00:10:05.130 SYMLINK libspdk_keyring_file.so 00:10:05.130 SO libspdk_accel_dsa.so.5.0 00:10:05.130 SYMLINK libspdk_blob_bdev.so 00:10:05.130 SYMLINK libspdk_accel_ioat.so 00:10:05.130 SYMLINK libspdk_accel_iaa.so 00:10:05.130 SYMLINK libspdk_accel_dsa.so 00:10:05.130 LIB libspdk_scheduler_gscheduler.a 00:10:05.130 SO libspdk_scheduler_gscheduler.so.4.0 00:10:05.388 SYMLINK libspdk_scheduler_gscheduler.so 00:10:05.388 CC module/bdev/gpt/gpt.o 00:10:05.388 CC module/bdev/nvme/bdev_nvme.o 00:10:05.388 CC module/blobfs/bdev/blobfs_bdev.o 00:10:05.388 CC module/bdev/malloc/bdev_malloc.o 00:10:05.388 CC module/bdev/lvol/vbdev_lvol.o 00:10:05.388 CC module/bdev/null/bdev_null.o 00:10:05.388 CC module/bdev/delay/vbdev_delay.o 00:10:05.388 CC module/bdev/error/vbdev_error.o 00:10:05.647 CC module/bdev/passthru/vbdev_passthru.o 00:10:05.648 LIB libspdk_sock_posix.a 00:10:05.648 SO libspdk_sock_posix.so.6.0 00:10:05.648 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:05.648 SYMLINK libspdk_sock_posix.so 00:10:05.648 CC module/bdev/gpt/vbdev_gpt.o 00:10:05.648 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:05.648 CC module/bdev/error/vbdev_error_rpc.o 00:10:05.906 CC module/bdev/null/bdev_null_rpc.o 00:10:05.906 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:05.906 LIB libspdk_blobfs_bdev.a 00:10:05.906 SO libspdk_blobfs_bdev.so.6.0 00:10:05.906 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:05.906 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:05.906 LIB libspdk_bdev_passthru.a 00:10:05.906 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:05.906 SYMLINK libspdk_blobfs_bdev.so 00:10:05.906 CC module/bdev/nvme/nvme_rpc.o 00:10:05.906 LIB libspdk_bdev_error.a 00:10:05.906 LIB libspdk_bdev_null.a 00:10:05.906 SO libspdk_bdev_passthru.so.6.0 00:10:05.906 SO libspdk_bdev_error.so.6.0 00:10:05.906 LIB libspdk_bdev_delay.a 00:10:05.906 LIB libspdk_bdev_gpt.a 00:10:06.164 SO libspdk_bdev_null.so.6.0 00:10:06.164 SO libspdk_bdev_delay.so.6.0 00:10:06.164 SYMLINK libspdk_bdev_passthru.so 00:10:06.164 LIB libspdk_bdev_malloc.a 00:10:06.164 SO libspdk_bdev_gpt.so.6.0 00:10:06.164 CC module/bdev/nvme/bdev_mdns_client.o 00:10:06.164 SYMLINK libspdk_bdev_delay.so 00:10:06.164 CC 
module/bdev/nvme/vbdev_opal.o 00:10:06.164 SO libspdk_bdev_malloc.so.6.0 00:10:06.164 SYMLINK libspdk_bdev_null.so 00:10:06.164 SYMLINK libspdk_bdev_error.so 00:10:06.164 SYMLINK libspdk_bdev_gpt.so 00:10:06.164 SYMLINK libspdk_bdev_malloc.so 00:10:06.164 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:06.421 CC module/bdev/raid/bdev_raid.o 00:10:06.421 CC module/bdev/split/vbdev_split.o 00:10:06.421 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:06.421 LIB libspdk_bdev_lvol.a 00:10:06.421 SO libspdk_bdev_lvol.so.6.0 00:10:06.421 CC module/bdev/aio/bdev_aio.o 00:10:06.421 CC module/bdev/aio/bdev_aio_rpc.o 00:10:06.421 SYMLINK libspdk_bdev_lvol.so 00:10:06.421 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:06.679 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:06.679 CC module/bdev/split/vbdev_split_rpc.o 00:10:06.679 CC module/bdev/raid/bdev_raid_rpc.o 00:10:06.679 CC module/bdev/raid/bdev_raid_sb.o 00:10:06.679 CC module/bdev/ftl/bdev_ftl.o 00:10:06.938 LIB libspdk_bdev_aio.a 00:10:06.938 LIB libspdk_bdev_zone_block.a 00:10:06.938 SO libspdk_bdev_zone_block.so.6.0 00:10:06.938 SO libspdk_bdev_aio.so.6.0 00:10:06.938 CC module/bdev/iscsi/bdev_iscsi.o 00:10:06.938 CC module/bdev/raid/raid0.o 00:10:06.938 LIB libspdk_bdev_split.a 00:10:06.938 SYMLINK libspdk_bdev_zone_block.so 00:10:06.938 CC module/bdev/raid/raid1.o 00:10:06.938 SO libspdk_bdev_split.so.6.0 00:10:06.938 SYMLINK libspdk_bdev_aio.so 00:10:06.938 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:06.938 CC module/bdev/raid/concat.o 00:10:07.196 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:07.196 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:07.196 SYMLINK libspdk_bdev_split.so 00:10:07.196 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:07.196 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:07.196 LIB libspdk_bdev_ftl.a 00:10:07.455 LIB libspdk_bdev_iscsi.a 00:10:07.455 SO libspdk_bdev_ftl.so.6.0 00:10:07.455 SO libspdk_bdev_iscsi.so.6.0 00:10:07.455 LIB libspdk_bdev_raid.a 00:10:07.455 SYMLINK libspdk_bdev_ftl.so 00:10:07.455 SYMLINK libspdk_bdev_iscsi.so 00:10:07.455 SO libspdk_bdev_raid.so.6.0 00:10:07.455 SYMLINK libspdk_bdev_raid.so 00:10:07.712 LIB libspdk_bdev_virtio.a 00:10:07.712 SO libspdk_bdev_virtio.so.6.0 00:10:07.712 SYMLINK libspdk_bdev_virtio.so 00:10:07.970 LIB libspdk_bdev_nvme.a 00:10:07.970 SO libspdk_bdev_nvme.so.7.0 00:10:08.229 SYMLINK libspdk_bdev_nvme.so 00:10:08.795 CC module/event/subsystems/vmd/vmd.o 00:10:08.795 CC module/event/subsystems/iobuf/iobuf.o 00:10:08.795 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:08.795 CC module/event/subsystems/scheduler/scheduler.o 00:10:08.795 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:08.795 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:08.795 CC module/event/subsystems/keyring/keyring.o 00:10:08.795 CC module/event/subsystems/sock/sock.o 00:10:08.795 LIB libspdk_event_vhost_blk.a 00:10:08.795 LIB libspdk_event_vmd.a 00:10:08.795 LIB libspdk_event_keyring.a 00:10:08.795 SO libspdk_event_vhost_blk.so.3.0 00:10:08.795 LIB libspdk_event_sock.a 00:10:08.795 SO libspdk_event_vmd.so.6.0 00:10:08.795 LIB libspdk_event_scheduler.a 00:10:08.795 LIB libspdk_event_iobuf.a 00:10:08.795 SO libspdk_event_keyring.so.1.0 00:10:08.795 SO libspdk_event_sock.so.5.0 00:10:08.795 SO libspdk_event_scheduler.so.4.0 00:10:08.795 SYMLINK libspdk_event_vmd.so 00:10:08.795 SYMLINK libspdk_event_vhost_blk.so 00:10:09.053 SO libspdk_event_iobuf.so.3.0 00:10:09.053 SYMLINK libspdk_event_sock.so 00:10:09.053 SYMLINK libspdk_event_keyring.so 00:10:09.053 SYMLINK 
libspdk_event_scheduler.so 00:10:09.053 SYMLINK libspdk_event_iobuf.so 00:10:09.311 CC module/event/subsystems/accel/accel.o 00:10:09.311 LIB libspdk_event_accel.a 00:10:09.311 SO libspdk_event_accel.so.6.0 00:10:09.569 SYMLINK libspdk_event_accel.so 00:10:09.827 CC module/event/subsystems/bdev/bdev.o 00:10:09.827 LIB libspdk_event_bdev.a 00:10:09.827 SO libspdk_event_bdev.so.6.0 00:10:10.086 SYMLINK libspdk_event_bdev.so 00:10:10.343 CC module/event/subsystems/ublk/ublk.o 00:10:10.343 CC module/event/subsystems/nbd/nbd.o 00:10:10.343 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:10.343 CC module/event/subsystems/scsi/scsi.o 00:10:10.343 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:10.343 LIB libspdk_event_nbd.a 00:10:10.343 LIB libspdk_event_ublk.a 00:10:10.343 SO libspdk_event_nbd.so.6.0 00:10:10.343 SO libspdk_event_ublk.so.3.0 00:10:10.601 LIB libspdk_event_scsi.a 00:10:10.601 SYMLINK libspdk_event_ublk.so 00:10:10.601 SO libspdk_event_scsi.so.6.0 00:10:10.601 SYMLINK libspdk_event_nbd.so 00:10:10.601 LIB libspdk_event_nvmf.a 00:10:10.601 SO libspdk_event_nvmf.so.6.0 00:10:10.601 SYMLINK libspdk_event_scsi.so 00:10:10.601 SYMLINK libspdk_event_nvmf.so 00:10:10.859 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:10.859 CC module/event/subsystems/iscsi/iscsi.o 00:10:11.117 LIB libspdk_event_vhost_scsi.a 00:10:11.117 LIB libspdk_event_iscsi.a 00:10:11.117 SO libspdk_event_vhost_scsi.so.3.0 00:10:11.117 SO libspdk_event_iscsi.so.6.0 00:10:11.117 SYMLINK libspdk_event_vhost_scsi.so 00:10:11.117 SYMLINK libspdk_event_iscsi.so 00:10:11.375 SO libspdk.so.6.0 00:10:11.375 SYMLINK libspdk.so 00:10:11.633 TEST_HEADER include/spdk/accel.h 00:10:11.633 TEST_HEADER include/spdk/accel_module.h 00:10:11.633 CXX app/trace/trace.o 00:10:11.633 TEST_HEADER include/spdk/assert.h 00:10:11.633 TEST_HEADER include/spdk/barrier.h 00:10:11.633 TEST_HEADER include/spdk/base64.h 00:10:11.633 TEST_HEADER include/spdk/bdev.h 00:10:11.633 TEST_HEADER include/spdk/bdev_module.h 00:10:11.633 TEST_HEADER include/spdk/bdev_zone.h 00:10:11.633 TEST_HEADER include/spdk/bit_array.h 00:10:11.633 CC app/trace_record/trace_record.o 00:10:11.633 TEST_HEADER include/spdk/bit_pool.h 00:10:11.633 TEST_HEADER include/spdk/blob_bdev.h 00:10:11.633 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:11.633 TEST_HEADER include/spdk/blobfs.h 00:10:11.633 TEST_HEADER include/spdk/blob.h 00:10:11.633 TEST_HEADER include/spdk/conf.h 00:10:11.633 TEST_HEADER include/spdk/config.h 00:10:11.633 TEST_HEADER include/spdk/cpuset.h 00:10:11.633 TEST_HEADER include/spdk/crc16.h 00:10:11.633 TEST_HEADER include/spdk/crc32.h 00:10:11.633 TEST_HEADER include/spdk/crc64.h 00:10:11.633 TEST_HEADER include/spdk/dif.h 00:10:11.633 TEST_HEADER include/spdk/dma.h 00:10:11.633 TEST_HEADER include/spdk/endian.h 00:10:11.633 TEST_HEADER include/spdk/env_dpdk.h 00:10:11.633 TEST_HEADER include/spdk/env.h 00:10:11.633 TEST_HEADER include/spdk/event.h 00:10:11.633 TEST_HEADER include/spdk/fd_group.h 00:10:11.633 TEST_HEADER include/spdk/fd.h 00:10:11.633 TEST_HEADER include/spdk/file.h 00:10:11.633 TEST_HEADER include/spdk/ftl.h 00:10:11.633 TEST_HEADER include/spdk/gpt_spec.h 00:10:11.633 TEST_HEADER include/spdk/hexlify.h 00:10:11.633 TEST_HEADER include/spdk/histogram_data.h 00:10:11.633 TEST_HEADER include/spdk/idxd.h 00:10:11.633 TEST_HEADER include/spdk/idxd_spec.h 00:10:11.633 TEST_HEADER include/spdk/init.h 00:10:11.633 TEST_HEADER include/spdk/ioat.h 00:10:11.633 TEST_HEADER include/spdk/ioat_spec.h 00:10:11.633 TEST_HEADER 
include/spdk/iscsi_spec.h 00:10:11.633 TEST_HEADER include/spdk/json.h 00:10:11.633 TEST_HEADER include/spdk/jsonrpc.h 00:10:11.633 CC examples/accel/perf/accel_perf.o 00:10:11.633 TEST_HEADER include/spdk/keyring.h 00:10:11.633 TEST_HEADER include/spdk/keyring_module.h 00:10:11.633 TEST_HEADER include/spdk/likely.h 00:10:11.633 TEST_HEADER include/spdk/log.h 00:10:11.633 TEST_HEADER include/spdk/lvol.h 00:10:11.633 TEST_HEADER include/spdk/memory.h 00:10:11.633 TEST_HEADER include/spdk/mmio.h 00:10:11.633 TEST_HEADER include/spdk/nbd.h 00:10:11.633 TEST_HEADER include/spdk/notify.h 00:10:11.633 TEST_HEADER include/spdk/nvme.h 00:10:11.633 TEST_HEADER include/spdk/nvme_intel.h 00:10:11.633 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:11.633 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:11.633 TEST_HEADER include/spdk/nvme_spec.h 00:10:11.633 CC test/dma/test_dma/test_dma.o 00:10:11.633 TEST_HEADER include/spdk/nvme_zns.h 00:10:11.633 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:11.633 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:11.633 TEST_HEADER include/spdk/nvmf.h 00:10:11.633 TEST_HEADER include/spdk/nvmf_spec.h 00:10:11.633 TEST_HEADER include/spdk/nvmf_transport.h 00:10:11.633 CC test/bdev/bdevio/bdevio.o 00:10:11.633 TEST_HEADER include/spdk/opal.h 00:10:11.633 TEST_HEADER include/spdk/opal_spec.h 00:10:11.633 TEST_HEADER include/spdk/pci_ids.h 00:10:11.633 TEST_HEADER include/spdk/pipe.h 00:10:11.633 CC test/accel/dif/dif.o 00:10:11.633 TEST_HEADER include/spdk/queue.h 00:10:11.633 TEST_HEADER include/spdk/reduce.h 00:10:11.633 TEST_HEADER include/spdk/rpc.h 00:10:11.633 TEST_HEADER include/spdk/scheduler.h 00:10:11.633 TEST_HEADER include/spdk/scsi.h 00:10:11.633 TEST_HEADER include/spdk/scsi_spec.h 00:10:11.633 TEST_HEADER include/spdk/sock.h 00:10:11.633 TEST_HEADER include/spdk/stdinc.h 00:10:11.633 TEST_HEADER include/spdk/string.h 00:10:11.633 TEST_HEADER include/spdk/thread.h 00:10:11.633 TEST_HEADER include/spdk/trace.h 00:10:11.633 TEST_HEADER include/spdk/trace_parser.h 00:10:11.633 CC test/blobfs/mkfs/mkfs.o 00:10:11.633 TEST_HEADER include/spdk/tree.h 00:10:11.633 TEST_HEADER include/spdk/ublk.h 00:10:11.633 TEST_HEADER include/spdk/util.h 00:10:11.891 TEST_HEADER include/spdk/uuid.h 00:10:11.891 CC test/env/mem_callbacks/mem_callbacks.o 00:10:11.891 TEST_HEADER include/spdk/version.h 00:10:11.891 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:11.891 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:11.891 TEST_HEADER include/spdk/vhost.h 00:10:11.891 TEST_HEADER include/spdk/vmd.h 00:10:11.891 TEST_HEADER include/spdk/xor.h 00:10:11.891 CC test/app/bdev_svc/bdev_svc.o 00:10:11.891 TEST_HEADER include/spdk/zipf.h 00:10:11.891 CXX test/cpp_headers/accel.o 00:10:11.891 CXX test/cpp_headers/accel_module.o 00:10:11.891 LINK spdk_trace_record 00:10:12.153 LINK spdk_trace 00:10:12.153 LINK mkfs 00:10:12.153 LINK bdev_svc 00:10:12.153 LINK dif 00:10:12.153 LINK test_dma 00:10:12.153 CXX test/cpp_headers/assert.o 00:10:12.153 LINK accel_perf 00:10:12.153 LINK bdevio 00:10:12.429 CC app/nvmf_tgt/nvmf_main.o 00:10:12.429 CXX test/cpp_headers/barrier.o 00:10:12.429 CC test/app/histogram_perf/histogram_perf.o 00:10:12.429 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:12.429 LINK mem_callbacks 00:10:12.429 CC app/iscsi_tgt/iscsi_tgt.o 00:10:12.429 CC test/app/jsoncat/jsoncat.o 00:10:12.429 LINK nvmf_tgt 00:10:12.687 CXX test/cpp_headers/base64.o 00:10:12.687 LINK histogram_perf 00:10:12.687 CC examples/ioat/perf/perf.o 00:10:12.687 CC examples/blob/hello_world/hello_blob.o 00:10:12.687 
CC examples/bdev/hello_world/hello_bdev.o 00:10:12.687 LINK jsoncat 00:10:12.687 CC test/env/vtophys/vtophys.o 00:10:12.687 LINK iscsi_tgt 00:10:12.687 CXX test/cpp_headers/bdev.o 00:10:12.945 LINK nvme_fuzz 00:10:12.945 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:12.945 LINK ioat_perf 00:10:12.945 LINK vtophys 00:10:12.945 CC examples/blob/cli/blobcli.o 00:10:12.945 LINK hello_blob 00:10:12.945 CC test/env/memory/memory_ut.o 00:10:12.945 CXX test/cpp_headers/bdev_module.o 00:10:12.945 LINK hello_bdev 00:10:12.945 LINK env_dpdk_post_init 00:10:13.203 CC examples/ioat/verify/verify.o 00:10:13.203 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:13.203 CC app/spdk_tgt/spdk_tgt.o 00:10:13.203 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:13.203 CXX test/cpp_headers/bdev_zone.o 00:10:13.203 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:13.203 CC examples/nvme/hello_world/hello_world.o 00:10:13.461 CC examples/bdev/bdevperf/bdevperf.o 00:10:13.461 LINK verify 00:10:13.461 LINK spdk_tgt 00:10:13.461 LINK blobcli 00:10:13.461 CXX test/cpp_headers/bit_array.o 00:10:13.461 CC test/app/stub/stub.o 00:10:13.461 LINK hello_world 00:10:13.462 CXX test/cpp_headers/bit_pool.o 00:10:13.719 LINK stub 00:10:13.719 CC app/spdk_lspci/spdk_lspci.o 00:10:13.719 CC test/event/event_perf/event_perf.o 00:10:13.719 LINK vhost_fuzz 00:10:13.719 CXX test/cpp_headers/blob_bdev.o 00:10:13.719 CC examples/nvme/reconnect/reconnect.o 00:10:13.977 LINK spdk_lspci 00:10:13.977 CC examples/sock/hello_world/hello_sock.o 00:10:13.977 LINK memory_ut 00:10:13.977 LINK event_perf 00:10:13.977 CXX test/cpp_headers/blobfs_bdev.o 00:10:14.234 CC test/nvme/aer/aer.o 00:10:14.234 CC test/lvol/esnap/esnap.o 00:10:14.234 LINK bdevperf 00:10:14.234 CC app/spdk_nvme_perf/perf.o 00:10:14.234 LINK hello_sock 00:10:14.234 CC test/event/reactor/reactor.o 00:10:14.234 LINK reconnect 00:10:14.234 CXX test/cpp_headers/blobfs.o 00:10:14.234 CC test/env/pci/pci_ut.o 00:10:14.492 LINK reactor 00:10:14.492 LINK aer 00:10:14.492 CXX test/cpp_headers/blob.o 00:10:14.492 CC app/spdk_nvme_identify/identify.o 00:10:14.492 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:14.492 CC test/rpc_client/rpc_client_test.o 00:10:14.492 CXX test/cpp_headers/conf.o 00:10:14.749 CC test/event/reactor_perf/reactor_perf.o 00:10:14.749 LINK pci_ut 00:10:14.749 CC test/nvme/reset/reset.o 00:10:14.749 LINK rpc_client_test 00:10:14.749 CXX test/cpp_headers/config.o 00:10:14.749 LINK reactor_perf 00:10:14.749 LINK iscsi_fuzz 00:10:14.749 CXX test/cpp_headers/cpuset.o 00:10:15.008 CXX test/cpp_headers/crc16.o 00:10:15.008 LINK reset 00:10:15.008 CXX test/cpp_headers/crc32.o 00:10:15.008 LINK spdk_nvme_perf 00:10:15.008 CC test/event/app_repeat/app_repeat.o 00:10:15.008 CC examples/nvme/hotplug/hotplug.o 00:10:15.008 LINK nvme_manage 00:10:15.008 CC examples/nvme/arbitration/arbitration.o 00:10:15.265 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:15.265 LINK app_repeat 00:10:15.265 CXX test/cpp_headers/crc64.o 00:10:15.265 CC test/nvme/sgl/sgl.o 00:10:15.265 CC test/nvme/e2edp/nvme_dp.o 00:10:15.265 LINK hotplug 00:10:15.265 LINK spdk_nvme_identify 00:10:15.524 LINK cmb_copy 00:10:15.524 CC app/spdk_nvme_discover/discovery_aer.o 00:10:15.524 LINK arbitration 00:10:15.524 CXX test/cpp_headers/dif.o 00:10:15.524 CC test/event/scheduler/scheduler.o 00:10:15.783 LINK nvme_dp 00:10:15.783 CC examples/nvme/abort/abort.o 00:10:15.783 LINK sgl 00:10:15.783 CXX test/cpp_headers/dma.o 00:10:15.783 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:15.783 LINK 
spdk_nvme_discover 00:10:15.783 CXX test/cpp_headers/endian.o 00:10:15.783 CXX test/cpp_headers/env_dpdk.o 00:10:15.783 CC examples/vmd/lsvmd/lsvmd.o 00:10:16.040 LINK pmr_persistence 00:10:16.040 LINK scheduler 00:10:16.040 CC test/nvme/overhead/overhead.o 00:10:16.040 CC app/spdk_top/spdk_top.o 00:10:16.040 LINK lsvmd 00:10:16.040 CXX test/cpp_headers/env.o 00:10:16.040 CC examples/vmd/led/led.o 00:10:16.040 CC examples/nvmf/nvmf/nvmf.o 00:10:16.299 LINK abort 00:10:16.299 LINK led 00:10:16.299 CXX test/cpp_headers/event.o 00:10:16.299 LINK overhead 00:10:16.299 CC examples/util/zipf/zipf.o 00:10:16.299 CXX test/cpp_headers/fd_group.o 00:10:16.558 CC examples/thread/thread/thread_ex.o 00:10:16.558 CC examples/idxd/perf/perf.o 00:10:16.558 LINK nvmf 00:10:16.558 CXX test/cpp_headers/fd.o 00:10:16.558 LINK zipf 00:10:16.558 CC test/nvme/err_injection/err_injection.o 00:10:16.558 CC test/nvme/startup/startup.o 00:10:16.558 CC test/nvme/reserve/reserve.o 00:10:16.558 CXX test/cpp_headers/file.o 00:10:16.816 LINK thread 00:10:16.816 LINK err_injection 00:10:16.816 LINK idxd_perf 00:10:16.816 CC test/nvme/simple_copy/simple_copy.o 00:10:16.816 LINK startup 00:10:16.816 CXX test/cpp_headers/ftl.o 00:10:16.816 LINK reserve 00:10:17.074 CXX test/cpp_headers/gpt_spec.o 00:10:17.074 LINK spdk_top 00:10:17.074 CC test/nvme/connect_stress/connect_stress.o 00:10:17.074 CXX test/cpp_headers/hexlify.o 00:10:17.074 LINK simple_copy 00:10:17.074 CC test/thread/poller_perf/poller_perf.o 00:10:17.332 CC test/nvme/boot_partition/boot_partition.o 00:10:17.332 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:17.332 CXX test/cpp_headers/histogram_data.o 00:10:17.332 CC test/nvme/compliance/nvme_compliance.o 00:10:17.332 LINK connect_stress 00:10:17.332 LINK poller_perf 00:10:17.332 CC test/nvme/fused_ordering/fused_ordering.o 00:10:17.332 CC app/vhost/vhost.o 00:10:17.332 LINK boot_partition 00:10:17.332 LINK interrupt_tgt 00:10:17.590 CXX test/cpp_headers/idxd.o 00:10:17.590 CXX test/cpp_headers/idxd_spec.o 00:10:17.590 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:17.590 LINK vhost 00:10:17.590 LINK fused_ordering 00:10:17.590 CXX test/cpp_headers/init.o 00:10:17.590 LINK nvme_compliance 00:10:17.590 CXX test/cpp_headers/ioat.o 00:10:17.849 CXX test/cpp_headers/ioat_spec.o 00:10:17.849 CC app/spdk_dd/spdk_dd.o 00:10:17.849 LINK doorbell_aers 00:10:17.849 CXX test/cpp_headers/iscsi_spec.o 00:10:17.849 CXX test/cpp_headers/json.o 00:10:17.849 CC test/nvme/fdp/fdp.o 00:10:17.849 CXX test/cpp_headers/jsonrpc.o 00:10:17.849 CC test/nvme/cuse/cuse.o 00:10:18.107 CXX test/cpp_headers/keyring.o 00:10:18.107 CXX test/cpp_headers/keyring_module.o 00:10:18.107 CC app/fio/nvme/fio_plugin.o 00:10:18.107 CXX test/cpp_headers/likely.o 00:10:18.365 LINK spdk_dd 00:10:18.365 CXX test/cpp_headers/log.o 00:10:18.365 CC app/fio/bdev/fio_plugin.o 00:10:18.365 LINK fdp 00:10:18.365 CXX test/cpp_headers/lvol.o 00:10:18.365 CXX test/cpp_headers/memory.o 00:10:18.365 CXX test/cpp_headers/mmio.o 00:10:18.623 CXX test/cpp_headers/nbd.o 00:10:18.623 CXX test/cpp_headers/notify.o 00:10:18.623 CXX test/cpp_headers/nvme.o 00:10:18.623 CXX test/cpp_headers/nvme_intel.o 00:10:18.623 CXX test/cpp_headers/nvme_ocssd.o 00:10:18.623 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:18.881 CXX test/cpp_headers/nvme_spec.o 00:10:18.881 CXX test/cpp_headers/nvme_zns.o 00:10:18.881 CXX test/cpp_headers/nvmf_cmd.o 00:10:18.881 LINK spdk_nvme 00:10:18.881 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:18.881 CXX test/cpp_headers/nvmf.o 00:10:18.881 CXX 
test/cpp_headers/nvmf_spec.o 00:10:18.881 LINK spdk_bdev 00:10:18.881 CXX test/cpp_headers/nvmf_transport.o 00:10:18.881 CXX test/cpp_headers/opal.o 00:10:19.139 CXX test/cpp_headers/opal_spec.o 00:10:19.139 CXX test/cpp_headers/pci_ids.o 00:10:19.139 CXX test/cpp_headers/pipe.o 00:10:19.139 CXX test/cpp_headers/queue.o 00:10:19.139 CXX test/cpp_headers/reduce.o 00:10:19.139 CXX test/cpp_headers/rpc.o 00:10:19.139 CXX test/cpp_headers/scheduler.o 00:10:19.139 CXX test/cpp_headers/scsi.o 00:10:19.139 CXX test/cpp_headers/scsi_spec.o 00:10:19.397 CXX test/cpp_headers/sock.o 00:10:19.397 CXX test/cpp_headers/stdinc.o 00:10:19.397 CXX test/cpp_headers/string.o 00:10:19.397 LINK cuse 00:10:19.397 LINK esnap 00:10:19.397 CXX test/cpp_headers/thread.o 00:10:19.397 CXX test/cpp_headers/trace.o 00:10:19.397 CXX test/cpp_headers/trace_parser.o 00:10:19.397 CXX test/cpp_headers/tree.o 00:10:19.397 CXX test/cpp_headers/ublk.o 00:10:19.655 CXX test/cpp_headers/util.o 00:10:19.655 CXX test/cpp_headers/uuid.o 00:10:19.655 CXX test/cpp_headers/version.o 00:10:19.655 CXX test/cpp_headers/vfio_user_pci.o 00:10:19.655 CXX test/cpp_headers/vfio_user_spec.o 00:10:19.655 CXX test/cpp_headers/vhost.o 00:10:19.655 CXX test/cpp_headers/vmd.o 00:10:19.655 CXX test/cpp_headers/xor.o 00:10:19.655 CXX test/cpp_headers/zipf.o 00:10:23.892 00:10:23.892 real 1m13.337s 00:10:23.892 user 7m43.189s 00:10:23.892 sys 1m39.725s 00:10:23.892 02:11:11 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:10:23.892 ************************************ 00:10:23.892 END TEST make 00:10:23.892 ************************************ 00:10:23.892 02:11:11 make -- common/autotest_common.sh@10 -- $ set +x 00:10:24.152 02:11:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:24.152 02:11:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:24.152 02:11:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:24.152 02:11:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:24.152 02:11:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:24.152 02:11:11 -- pm/common@44 -- $ pid=5182 00:10:24.152 02:11:11 -- pm/common@50 -- $ kill -TERM 5182 00:10:24.152 02:11:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:24.152 02:11:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:24.152 02:11:11 -- pm/common@44 -- $ pid=5184 00:10:24.152 02:11:11 -- pm/common@50 -- $ kill -TERM 5184 00:10:24.152 02:11:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.152 02:11:12 -- nvmf/common.sh@7 -- # uname -s 00:10:24.152 02:11:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.152 02:11:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.152 02:11:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.152 02:11:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.152 02:11:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.152 02:11:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.152 02:11:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.152 02:11:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.152 02:11:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.152 02:11:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.152 02:11:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:10:24.152 
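The nvmf/common.sh values set here (NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS=127.0.0.1, the freshly generated NVME_HOSTNQN, plus the NVME_HOSTID, NVME_HOST, NVME_CONNECT and NVME_SUBNQN settings that follow) are the knobs the kernel-initiator TCP tests typically feed to nvme-cli. A hedged sketch of how such a connect call is usually assembled from them (illustrative only, not the literal test code):
# Assumes the variables above are in scope, e.g. after sourcing
# test/nvmf/common.sh from an SPDK shell; NVME_CONNECT expands to 'nvme connect'.
$NVME_CONNECT "${NVME_HOST[@]}" \
  -t tcp \
  -a "$NVMF_TCP_IP_ADDRESS" \
  -s "$NVMF_PORT" \
  -n "$NVME_SUBNQN"
# which expands to roughly:
#   nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:... --hostid=... \
#     -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn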
02:11:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:10:24.152 02:11:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.152 02:11:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.152 02:11:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.152 02:11:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.152 02:11:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.152 02:11:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.152 02:11:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.152 02:11:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.152 02:11:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.152 02:11:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.152 02:11:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.152 02:11:12 -- paths/export.sh@5 -- # export PATH 00:10:24.152 02:11:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.152 02:11:12 -- nvmf/common.sh@47 -- # : 0 00:10:24.152 02:11:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.152 02:11:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.152 02:11:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.152 02:11:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.152 02:11:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.152 02:11:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.152 02:11:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.152 02:11:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.152 02:11:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:24.152 02:11:12 -- spdk/autotest.sh@32 -- # uname -s 00:10:24.152 02:11:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:24.152 02:11:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:24.152 02:11:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:24.152 02:11:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:24.152 02:11:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:24.152 02:11:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:24.152 02:11:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:24.152 02:11:12 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:24.152 02:11:12 -- spdk/autotest.sh@48 -- # udevadm_pid=53998 00:10:24.152 02:11:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:24.152 02:11:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:24.152 02:11:12 -- pm/common@17 -- # local monitor 00:10:24.152 02:11:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:24.152 02:11:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:24.152 02:11:12 -- pm/common@25 -- # sleep 1 00:10:24.152 02:11:12 -- pm/common@21 -- # date +%s 00:10:24.152 02:11:12 -- pm/common@21 -- # date +%s 00:10:24.152 02:11:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715739072 00:10:24.152 02:11:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715739072 00:10:24.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715739072_collect-vmstat.pm.log 00:10:24.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715739072_collect-cpu-load.pm.log 00:10:25.526 02:11:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:25.526 02:11:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:25.526 02:11:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:25.526 02:11:13 -- common/autotest_common.sh@10 -- # set +x 00:10:25.526 02:11:13 -- spdk/autotest.sh@59 -- # create_test_list 00:10:25.526 02:11:13 -- common/autotest_common.sh@744 -- # xtrace_disable 00:10:25.526 02:11:13 -- common/autotest_common.sh@10 -- # set +x 00:10:25.526 02:11:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:25.526 02:11:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:25.526 02:11:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:25.526 02:11:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:25.526 02:11:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:25.526 02:11:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:25.526 02:11:13 -- common/autotest_common.sh@1451 -- # uname 00:10:25.526 02:11:13 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:10:25.526 02:11:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:25.526 02:11:13 -- common/autotest_common.sh@1471 -- # uname 00:10:25.526 02:11:13 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:10:25.526 02:11:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:25.526 02:11:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:25.526 02:11:13 -- spdk/autotest.sh@72 -- # hash lcov 00:10:25.526 02:11:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:25.526 02:11:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:25.526 --rc lcov_branch_coverage=1 00:10:25.526 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 ' 00:10:25.526 02:11:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:25.526 --rc lcov_branch_coverage=1 00:10:25.526 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc 
genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 ' 00:10:25.526 02:11:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:25.526 --rc lcov_branch_coverage=1 00:10:25.526 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --no-external' 00:10:25.526 02:11:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:25.526 --rc lcov_branch_coverage=1 00:10:25.526 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --no-external' 00:10:25.526 02:11:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:25.526 lcov: LCOV version 1.14 00:10:25.526 02:11:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:33.637 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:33.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:33.895 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:33.895 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:33.895 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:33.895 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:10:42.045 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:42.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:54.239 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:54.239 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:10:54.240 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:54.240 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no 
functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:10:54.499 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:54.499 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:10:54.499 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:54.500 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:54.500 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:54.500 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:54.500 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:54.500 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:10:54.500 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:54.500 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:54.758 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:10:54.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:54.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:10:58.954 02:11:46 -- spdk/autotest.sh@89 -- # 
timing_enter pre_cleanup 00:10:58.954 02:11:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:58.954 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:10:58.954 02:11:46 -- spdk/autotest.sh@91 -- # rm -f 00:10:58.954 02:11:46 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:59.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.213 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:59.213 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:59.213 02:11:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:59.213 02:11:47 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:10:59.213 02:11:47 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:10:59.213 02:11:47 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:10:59.213 02:11:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:59.213 02:11:47 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:10:59.213 02:11:47 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:10:59.213 02:11:47 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:59.213 02:11:47 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:10:59.213 02:11:47 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:10:59.213 02:11:47 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:59.213 02:11:47 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:10:59.213 02:11:47 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:10:59.213 02:11:47 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:59.213 02:11:47 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:10:59.213 02:11:47 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:10:59.213 02:11:47 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:59.213 02:11:47 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:59.213 02:11:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:59.213 02:11:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:59.213 02:11:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:59.213 02:11:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:59.213 02:11:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:59.213 02:11:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:59.213 No valid GPT data, bailing 00:10:59.213 02:11:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:59.213 02:11:47 -- scripts/common.sh@391 -- # pt= 00:10:59.213 02:11:47 -- scripts/common.sh@392 -- # return 1 00:10:59.213 02:11:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:59.213 1+0 records in 00:10:59.213 1+0 records out 00:10:59.213 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00376885 s, 278 MB/s 00:10:59.213 02:11:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:59.213 02:11:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:59.213 02:11:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:10:59.213 02:11:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:10:59.213 02:11:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:59.472 No valid GPT data, bailing 00:10:59.472 02:11:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:59.472 02:11:47 -- scripts/common.sh@391 -- # pt= 00:10:59.472 02:11:47 -- scripts/common.sh@392 -- # return 1 00:10:59.472 02:11:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:59.472 1+0 records in 00:10:59.472 1+0 records out 00:10:59.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0031943 s, 328 MB/s 00:10:59.472 02:11:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:59.472 02:11:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:59.472 02:11:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:10:59.472 02:11:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:10:59.472 02:11:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:59.473 No valid GPT data, bailing 00:10:59.473 02:11:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:59.473 02:11:47 -- scripts/common.sh@391 -- # pt= 00:10:59.473 02:11:47 -- scripts/common.sh@392 -- # return 1 00:10:59.473 02:11:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:59.473 1+0 records in 00:10:59.473 1+0 records out 00:10:59.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389779 s, 269 MB/s 00:10:59.473 02:11:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:59.473 02:11:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:59.473 02:11:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:10:59.473 02:11:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:10:59.473 02:11:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:59.473 No valid GPT data, bailing 00:10:59.473 02:11:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:59.473 02:11:47 -- scripts/common.sh@391 -- # pt= 00:10:59.473 02:11:47 -- scripts/common.sh@392 -- # return 1 00:10:59.473 02:11:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:59.473 1+0 records in 00:10:59.473 1+0 records out 00:10:59.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00350442 s, 299 MB/s 00:10:59.473 02:11:47 -- spdk/autotest.sh@118 -- # sync 00:10:59.473 02:11:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:59.473 02:11:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:59.473 02:11:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:01.390 02:11:48 -- spdk/autotest.sh@124 -- # uname -s 00:11:01.390 02:11:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:01.390 02:11:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:01.390 02:11:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:01.390 02:11:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:01.390 02:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:01.390 
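
A minimal sketch (not part of the captured output) of the partition-table probe and first-MiB wipe that the pre-cleanup trace above performs on each whole NVMe namespace; the paths and device glob are the ones used in this run, and error handling is simplified:

    shopt -s extglob

    block_in_use() {
        local block=$1 pt
        # SPDK's GPT helper is run first; in this run it reported
        # "No valid GPT data, bailing" for each namespace.
        /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block"
        pt=$(blkid -s PTTYPE -o value "$block")
        [[ -n $pt ]]    # "in use" only if a partition table is present
    }

    for dev in /dev/nvme*n!(*p*); do    # whole namespaces only, as in the trace above
        # Namespaces without a partition table get their first MiB zeroed before the tests run.
        block_in_use "$dev" || dd if=/dev/zero of="$dev" bs=1M count=1
    done
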
************************************ 00:11:01.390 START TEST setup.sh 00:11:01.390 ************************************ 00:11:01.390 02:11:48 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:01.390 * Looking for test storage... 00:11:01.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:01.390 02:11:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:01.390 02:11:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:01.390 02:11:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:01.390 02:11:49 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:01.390 02:11:49 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:01.390 02:11:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:01.390 ************************************ 00:11:01.390 START TEST acl 00:11:01.390 ************************************ 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:01.390 * Looking for test storage... 00:11:01.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:11:01.390 02:11:49 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:01.390 02:11:49 setup.sh.acl -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:01.390 02:11:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:01.390 02:11:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:01.390 02:11:49 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:01.968 02:11:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:01.968 02:11:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:01.968 02:11:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:01.968 02:11:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:01.968 02:11:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:01.968 02:11:49 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.534 Hugepages 00:11:02.534 node hugesize free / total 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.534 00:11:02.534 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:02.534 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:11:02.792 02:11:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:11:02.792 02:11:50 setup.sh.acl -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:02.792 02:11:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.792 02:11:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:02.792 ************************************ 00:11:02.792 START TEST denied 00:11:02.792 ************************************ 00:11:02.792 02:11:50 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:11:02.792 02:11:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:11:02.792 02:11:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:11:02.792 02:11:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:02.792 02:11:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:02.792 02:11:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:03.725 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:03.725 02:11:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:03.983 00:11:03.983 real 0m1.280s 00:11:03.983 user 0m0.546s 00:11:03.983 sys 0m0.683s 00:11:03.983 ************************************ 00:11:03.983 END TEST denied 00:11:03.983 ************************************ 00:11:03.983 02:11:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.983 02:11:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:03.983 02:11:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:03.983 02:11:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:03.983 02:11:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:03.983 02:11:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:03.983 ************************************ 00:11:03.983 START TEST allowed 00:11:03.983 ************************************ 00:11:03.983 02:11:51 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:11:03.983 02:11:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:11:03.983 02:11:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:03.983 02:11:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:03.983 02:11:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:11:03.983 02:11:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:04.917 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.917 02:11:52 
setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:04.917 02:11:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:04.918 02:11:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:04.918 02:11:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:05.487 00:11:05.487 real 0m1.426s 00:11:05.487 user 0m0.663s 00:11:05.487 sys 0m0.754s 00:11:05.487 02:11:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:05.487 ************************************ 00:11:05.487 END TEST allowed 00:11:05.487 ************************************ 00:11:05.487 02:11:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:05.487 ************************************ 00:11:05.487 END TEST acl 00:11:05.487 ************************************ 00:11:05.487 00:11:05.487 real 0m4.316s 00:11:05.487 user 0m1.974s 00:11:05.487 sys 0m2.291s 00:11:05.487 02:11:53 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:05.487 02:11:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:05.487 02:11:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:05.487 02:11:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:05.487 02:11:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:05.487 02:11:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:05.487 ************************************ 00:11:05.487 START TEST hugepages 00:11:05.487 ************************************ 00:11:05.487 02:11:53 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:05.487 * Looking for test storage... 
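
A minimal sketch (not part of the captured output) of the deny-list/allow-list behaviour the two ACL tests above exercise: setup.sh honours the PCI_BLOCKED and PCI_ALLOWED environment variables, and the bound driver is then checked through sysfs. The BDF 0000:00:10.0 and the repository path are the ones from this run:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Deny-list: the blocked controller stays on its current driver and setup.sh
    # reports "Skipping denied controller at 0000:00:10.0".
    PCI_BLOCKED='0000:00:10.0' "$SPDK/scripts/setup.sh" config

    # Allow-list: only the listed controller is rebound (nvme -> uio_pci_generic in this run).
    PCI_ALLOWED='0000:00:10.0' "$SPDK/scripts/setup.sh" config

    # Either way, the bound driver can be verified through sysfs, as the traces do:
    basename "$(readlink -f /sys/bus/pci/devices/0000:00:10.0/driver)"
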
00:11:05.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5456660 kB' 'MemAvailable: 7387496 kB' 'Buffers: 2436 kB' 'Cached: 2140624 kB' 'SwapCached: 0 kB' 'Active: 873736 kB' 'Inactive: 1373368 kB' 'Active(anon): 114532 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 105676 kB' 'Mapped: 48648 kB' 'Shmem: 10488 kB' 'KReclaimable: 70404 kB' 'Slab: 144292 kB' 'SReclaimable: 70404 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6316 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.746 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.747 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
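The xtrace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo key by key until it reaches Hugepagesize (found just below, where the loop echoes 2048 and returns). A minimal standalone sketch of that lookup pattern, with a hypothetical helper name rather than the real common.sh interface:

    # Scan /proc/meminfo until the requested key is found, then print its value.
    # get_meminfo_field is an illustrative name, not the SPDK helper itself.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_field Hugepagesize   # prints 2048 on this runner

The same loop is re-entered further down for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is why the key-comparison/continue pattern repeats throughout this part of the log.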
00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:05.748 02:11:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:05.748 02:11:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:05.748 02:11:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:05.748 02:11:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:05.748 ************************************ 00:11:05.748 START TEST default_setup 00:11:05.748 ************************************ 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:05.748 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:05.749 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:05.749 02:11:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:05.749 02:11:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:05.749 02:11:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:06.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.315 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.578 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:06.578 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.578 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7590992 kB' 'MemAvailable: 9521616 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890304 kB' 'Inactive: 1373376 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 122092 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143912 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73944 kB' 'KernelStack: 6288 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
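Further up the trace, default_setup asked get_test_nr_hugepages for 2097152 kB on node 0 and ended up with nr_hugepages=1024, which works out to the requested size divided by the 2048 kB default hugepage size read from /proc/meminfo. A rough sketch of that arithmetic with illustrative variable names (not the hugepages.sh internals):

    size_kb=2097152      # 2 GiB request passed to get_test_nr_hugepages above
    hugepage_kb=2048     # Hugepagesize reported by /proc/meminfo on this runner
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "node 0 gets ${nr_hugepages} hugepages"   # -> 1024, matching nodes_test[0]=1024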
00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.579 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
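The block this sits in is verify_nr_hugepages collecting AnonHugePages (and, below, HugePages_Surp and HugePages_Rsvd) from the same /proc/meminfo snapshot before checking the pool it just configured. A hedged sketch of that kind of verification, using awk instead of the traced read loop and illustrative variable names:

    # Pull the counters verify_nr_hugepages reads here and sanity-check the
    # pool size against the 1024 pages configured above. Illustrative only.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "anon=${anon} resv=${resv} surp=${surp} total=${total}"
    (( total == 1024 )) || echo "unexpected hugepage pool size" >&2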
00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7590404 kB' 'MemAvailable: 9521028 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890404 kB' 'Inactive: 1373376 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 122364 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143916 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73948 kB' 'KernelStack: 6272 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 
'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.580 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.581 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 
02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.582 
02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7590656 kB' 'MemAvailable: 9521284 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890304 kB' 'Inactive: 1373380 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122256 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143916 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73948 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 
02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.582 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.583 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:11:06.584 nr_hugepages=1024 00:11:06.584 resv_hugepages=0 00:11:06.584 surplus_hugepages=0 00:11:06.584 anon_hugepages=0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:06.584 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7590656 kB' 'MemAvailable: 9521284 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890492 kB' 'Inactive: 1373380 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122132 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143916 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73948 kB' 'KernelStack: 6272 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.584 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
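(Aside: the xtrace above and below is setup/common.sh's get_meminfo walking a meminfo file field by field — it reads /proc/meminfo, or a per-node meminfo file when a node index is given, strips the "Node N " prefix those per-node files carry, splits each row on ': ', and echoes the value whose key matches the requested field. A minimal standalone sketch of that parsing pattern follows; the function name meminfo_value, its arguments, and the 0 fallback are illustrative choices for this sketch, not taken from the SPDK scripts.)

# meminfo_value <field> [node] - print the value of <field> from /proc/meminfo,
# or from /sys/devices/system/node/node<node>/meminfo when a node index is given.
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        if [[ $line == Node\ * ]]; then   # per-node files prefix every row with "Node <id> "
            line=${line#Node }            # drop "Node "
            line=${line#* }               # drop the node index
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0   # field not present; this sketch falls back to 0
}

# Example matching the trace above: meminfo_value HugePages_Rsvd  ->  0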
00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.585 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
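(Aside: once this loop reaches HugePages_Total it echoes 1024, and hugepages.sh@110 then re-checks that the kernel's total equals the requested count plus any surplus and reserved pages, i.e. (( 1024 == nr_hugepages + surp + resv )) in the trace. A hedged sketch of that arithmetic check, reusing the illustrative meminfo_value helper from the previous sketch; check_hugepage_total is not a function from the scripts.)

# Verify the kernel ended up with exactly the hugepage count the test asked for.
check_hugepage_total() {
    local nr_hugepages=$1          # requested count (1024 in this run)
    local total surp resv
    total=$(meminfo_value HugePages_Total)
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage total consistent: $total"
    else
        echo "unexpected hugepage total: $total (wanted $((nr_hugepages + surp + resv)))" >&2
        return 1
    fi
}

# In this run: total=1024, surp=0, resv=0, so the check passes.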
00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:06.586 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591244 kB' 'MemUsed: 4650736 kB' 'SwapCached: 0 kB' 'Active: 890284 kB' 'Inactive: 1373380 kB' 'Active(anon): 131080 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2143052 kB' 'Mapped: 48608 kB' 'AnonPages: 122264 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 143916 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.586 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:06.587 02:11:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[... setup/common.sh@31-@32 read the remaining meminfo fields -- Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free -- and continue past each one; none matches HugePages_Surp ...]
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:11:06.587 node0=1024 expecting 1024
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:11:06.587
00:11:06.587 real 0m0.977s
00:11:06.587 user 0m0.473s
00:11:06.587 sys 0m0.437s
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:06.587 02:11:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:11:06.587 ************************************
00:11:06.587 END TEST default_setup
00:11:06.587 ************************************
00:11:06.846 02:11:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:11:06.846 02:11:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:11:06.846 02:11:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
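For reference, the scan traced above (and repeated for every get_meminfo call that follows) boils down to: split each meminfo line on ': ', compare the key against the requested field, and echo the value on a match. A minimal standalone sketch of that pattern in bash -- function and variable names here are illustrative, not the actual helpers in setup/common.sh, which also handle the per-node files under /sys/devices/system/node and iterate over a mapfile'd copy of the file:

  #!/usr/bin/env bash
  # Print the value of one /proc/meminfo field, e.g. `meminfo_field HugePages_Surp` -> 0.
  meminfo_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"          # numeric value only; a trailing "kB" unit lands in $_
              return 0
          fi
      done < /proc/meminfo
      return 1                     # requested field not present
  }
  meminfo_field HugePages_Surp

Each [[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pair in the xtrace above is one iteration of exactly this kind of loop.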
00:11:06.846 02:11:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 ************************************ 00:11:06.846 START TEST per_node_1G_alloc 00:11:06.846 ************************************ 00:11:06.846 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:11:06.846 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:06.846 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:06.847 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.110 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:07.110 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:07.110 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8640416 kB' 'MemAvailable: 10571048 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890768 kB' 'Inactive: 1373384 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122716 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143960 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 73992 kB' 'KernelStack: 6340 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
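The snapshot just read back lines up with the request made at the start of this test: get_test_nr_hugepages was called with 1048576 (kB, i.e. 1 GiB) for node 0, and with Hugepagesize: 2048 kB that works out to the 512 pages now reported as HugePages_Total/HugePages_Free (and Hugetlb: 1048576 kB). A quick sketch of that arithmetic, with illustrative variable names:

  #!/usr/bin/env bash
  # 1 GiB of hugepages requested on one node, using the default 2 MiB hugepage size.
  size_kb=1048576
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  echo $(( size_kb / hugepagesize_kb ))                                # -> 512, matching HugePages_Total above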
[... setup/common.sh@31-@32 then walk the full /proc/meminfo list -- MemTotal through HardwareCorrupted -- comparing each field against AnonHugePages and continuing past it ...]
AnonHugePages finally matches, so setup/common.sh@33 echoes 0 and returns 0; verify_nr_hugepages records anon=0 (setup/hugepages.sh@97) and calls get_meminfo HugePages_Surp (setup/hugepages.sh@99), which starts over at setup/common.sh@17-@19: local get=HugePages_Surp, local node=, local var
val 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8640740 kB' 'MemAvailable: 10571372 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890356 kB' 'Inactive: 1373384 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143968 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 74000 kB' 'KernelStack: 6296 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.111 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.111 02:11:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@31-@32 continue through the remaining fields -- Cached through HugePages_Rsvd -- none of which matches HugePages_Surp ...]
HugePages_Surp then matches, so setup/common.sh@33 echoes 0 and returns 0; verify_nr_hugepages records surp=0 (setup/hugepages.sh@99) and calls get_meminfo HugePages_Rsvd (setup/hugepages.sh@100), which starts over at setup/common.sh@17: local get=HugePages_Rsvd
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:07.113 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8640740 kB' 'MemAvailable: 10571372 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890244 kB' 'Inactive: 1373384 kB' 'Active(anon): 131040 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122144 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143972 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
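That is the third /proc/meminfo read in a row: the three get_meminfo calls feed the surp, resv and anon locals that verify_nr_hugepages declared at setup/hugepages.sh@92-@94 (so far anon=0 and surp=0 in this run). As a simplified stand-in for that kind of check -- not the exact accounting in setup/hugepages.sh, just its shape -- one can read the global counters once and confirm the configured page count matches what the test asked for:

  #!/usr/bin/env bash
  # Simplified hugepage verification: were the requested pages actually configured?
  expected=512   # what NRHUGE=512 HUGENODE=0 asked for in this test
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  echo "HugePages_Total=$total HugePages_Surp=$surp HugePages_Rsvd=$rsvd (expecting $expected)"
  [[ $total -eq $expected ]]   # exit status reflects the comparison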
[... setup/common.sh@31-@32 scan /proc/meminfo once more, this time for HugePages_Rsvd: MemTotal, MemFree, MemAvailable, Buffers and each field after them are compared and skipped with continue; the walk is still in progress at KernelStack ...]
00:11:07.114 02:11:54
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.114 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 
02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.115 nr_hugepages=512 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:07.115 resv_hugepages=0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:07.115 surplus_hugepages=0 00:11:07.115 anon_hugepages=0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
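
The loop traced above is get_meminfo from setup/common.sh walking /proc/meminfo one "key: value" line at a time with IFS=': ' and skipping every key until it reaches the one requested, here HugePages_Rsvd, whose value (0) it then echoes back to hugepages.sh. A condensed, standalone sketch of the same lookup; lookup_meminfo is a hypothetical name for illustration, not the SPDK helper itself:

    # Print the numeric value of a single /proc/meminfo key.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip until the requested key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    lookup_meminfo HugePages_Rsvd   # prints 0 in the run captured here
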
anon_hugepages=0 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8640740 kB' 'MemAvailable: 10571372 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890540 kB' 'Inactive: 1373384 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122444 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 143972 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6304 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.115 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.116 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8640740 kB' 'MemUsed: 3601240 kB' 'SwapCached: 0 kB' 'Active: 890080 kB' 'Inactive: 1373384 kB' 'Active(anon): 130876 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2143052 kB' 'Mapped: 48612 kB' 'AnonPages: 121988 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 143976 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 74008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.117 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.118 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.119 02:11:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:07.119 node0=512 expecting 512 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:07.119 00:11:07.119 real 0m0.454s 00:11:07.119 user 0m0.234s 00:11:07.119 sys 0m0.246s 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.119 ************************************ 00:11:07.119 END TEST per_node_1G_alloc 00:11:07.119 ************************************ 00:11:07.119 02:11:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:07.119 02:11:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:07.119 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:07.119 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.119 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:07.119 ************************************ 00:11:07.119 START TEST even_2G_alloc 00:11:07.119 ************************************ 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:07.119 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:07.119 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:07.120 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.378 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:07.378 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
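The even_2G_alloc setup traced above comes down to a small piece of arithmetic: the test asks get_test_nr_hugepages for 2097152 kB (2 GiB), the default hugepage size on this VM is 2048 kB (see the Hugepagesize field in the meminfo dumps below), and with no user-supplied node list on a single-node VM the whole count lands in nodes_test[0]. The sketch below spells that out; the variable and helper names follow the trace, but the division is an assumed derivation, since the trace only shows size=2097152 going in and nr_hugepages=1024 coming out.

    # Hedged sketch, not the real test/setup/hugepages.sh helpers.
    size_kb=2097152                                                        # 2 GiB requested by even_2G_alloc
    default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this VM
    nr_hugepages=$(( size_kb / default_hugepage_kb ))                      # 2097152 / 2048 = 1024
    no_nodes=1                                                             # single NUMA node per the trace
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages                                 # all 1024 pages go to node 0
    echo "nodes_test[0]=${nodes_test[0]} (expected 1024)"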
00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594020 kB' 'MemAvailable: 9524652 kB' 'Buffers: 2436 kB' 'Cached: 2140616 kB' 'SwapCached: 0 kB' 'Active: 890816 kB' 'Inactive: 1373384 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 144064 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 74096 kB' 'KernelStack: 6356 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
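The long runs of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] followed by continue are xtrace output from get_meminfo in setup/common.sh scanning the memory counters one field at a time until it reaches the requested key, then echoing that key's value (0 here, since AnonHugePages is 0 kB). The sketch below reconstructs that behaviour from the commands visible in the trace, including the /proc/meminfo default, the per-node fallback path and the 'Node <n> ' prefix handling; it is an approximation for reading the log, not the verbatim function.

    # Approximate reconstruction of the get_meminfo behaviour shown in the trace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, the per-node file is used instead; its lines carry a
        # "Node <n> " prefix, which the real script strips from the mapfile'd array.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0
    }
    # e.g. get_meminfo_sketch AnonHugePages -> 0 on the VM in this log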
00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
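A note on the trace format itself: the right-hand side of each comparison is rendered as \A\n\o\n\H\u\g\e\P\a\g\e\s because bash xtrace re-quotes a quoted == operand inside [[ ]] by escaping every character, marking it as a literal string match rather than a glob pattern. A two-line reproduction (the variable name get is illustrative, not taken from the script):

    set -x
    get=AnonHugePages
    [[ MemTotal == "$get" ]] || true   # xtrace prints: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x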
00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7593768 kB' 'MemAvailable: 9524404 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890272 kB' 'Inactive: 1373388 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122168 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144060 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74096 kB' 'KernelStack: 6368 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.638 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
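For anyone reproducing these numbers outside the test harness: the HugePages_* counters that these scans extract are ordinary kernel interfaces, readable directly from /proc and sysfs. The paths below are standard kernel locations, not part of the SPDK scripts.

    # System-wide counters, the same fields the trace scans /proc/meminfo for:
    grep -E '^(AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp|Hugepagesize):' /proc/meminfo
    # Per-size counters for the 2 MiB pool used in this run:
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%s: %s\n' "$f" "$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/$f)"
    done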
00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 
02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:07.639 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7593516 kB' 'MemAvailable: 9524152 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890264 kB' 'Inactive: 1373388 kB' 'Active(anon): 131060 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144060 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74096 kB' 'KernelStack: 6352 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.639 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
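This stretch of the log collects three values one scan at a time: AnonHugePages (anon=0), HugePages_Surp (surp=0) and now HugePages_Rsvd. As a compact illustration of what the repeated loops end up producing on this VM, the same three fields can be pulled in a single pass; the snippet is an equivalent for reading along, not code from the harness.

    # One-pass equivalent of the three get_meminfo scans traced in this section (illustrative only).
    read -r anon surp resv < <(awk '
        /^AnonHugePages:/  {a=$2}
        /^HugePages_Surp:/ {s=$2}
        /^HugePages_Rsvd:/ {r=$2}
        END {print a+0, s+0, r+0}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv"    # anon=0 surp=0 resv=0 per the meminfo dumps above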
00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... the same setup/common.sh@31 read / @32 continue xtrace repeats for every /proc/meminfo field from Unevictable through CmaTotal, none of which matches HugePages_Rsvd ...]
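What the trace above is doing, boiled down: setup/common.sh walks /proc/meminfo with IFS=': ' and read -r var val _, skipping every field until it reaches the requested one and echoing its value. A minimal sketch of that lookup, with a hypothetical helper name (this is not SPDK's exact code):

get_key() {                              # hypothetical name for the traced lookup
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue     # the long run of "continue" lines above
    echo "$val"                          # value of the requested field
    return 0
  done < /proc/meminfo
  return 1
}
get_key HugePages_Rsvd                   # prints 0 on this runner, matching the trace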
setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:07.640 nr_hugepages=1024 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:07.640 resv_hugepages=0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:07.640 surplus_hugepages=0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:07.640 anon_hugepages=0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.640 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.640 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7593516 kB' 'MemAvailable: 9524152 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890332 kB' 'Inactive: 1373388 kB' 'Active(anon): 131128 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122236 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144060 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74096 kB' 'KernelStack: 6352 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... the @31 read / @32 continue xtrace repeats for every /proc/meminfo field from Cached through CmaFree, none of which matches HugePages_Total ...]
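The numbers echoed earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) feed a simple consistency check in hugepages.sh. Roughly, with the values taken from the meminfo dump above (variable names here are illustrative):

nr_hugepages=1024    # requested even 2G allocation: 1024 x 2048 kB pages
resv=0               # HugePages_Rsvd from the dump above
surp=0               # surplus_hugepages echoed above
total=1024           # HugePages_Total from the dump above
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"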
02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7593264 kB' 'MemUsed: 4648716 kB' 'SwapCached: 0 kB' 'Active: 890368 kB' 'Inactive: 1373388 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2143056 kB' 'Mapped: 48612 kB' 'AnonPages: 
122272 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69964 kB' 'Slab: 144052 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.641 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... the @31 read / @32 continue xtrace repeats for every field of the node0 meminfo dump from Active(file) through FilePmdMapped, none of which matches HugePages_Surp ...] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
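For this per-node pass, the same lookup is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and each line's "Node 0 " prefix is stripped before splitting (the mem=("${mem[@]#Node +([0-9]) }") step in the trace). A sketch of that variant, under the same caveat that the code below is illustrative rather than SPDK's own:

node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
while IFS= read -r line; do
  line=${line#Node $node }                       # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
  IFS=': ' read -r var val _ <<< "$line"
  [[ $var == HugePages_Surp ]] && { echo "$val"; break; }   # prints 0 here, as returned above
done < "$mem_f"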
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:07.642 node0=1024 expecting 1024 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:07.642 00:11:07.642 real 0m0.487s 00:11:07.642 user 0m0.271s 00:11:07.642 sys 0m0.247s 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.642 02:11:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:07.642 ************************************ 00:11:07.642 END TEST even_2G_alloc 00:11:07.642 ************************************ 00:11:07.642 02:11:55 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:07.642 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:07.642 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.642 02:11:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:07.642 ************************************ 00:11:07.642 START TEST odd_alloc 00:11:07.642 ************************************ 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:11:07.642 02:11:55 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:07.642 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.900 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:07.900 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
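The odd_alloc sizing above follows from HUGEMEM=2049: 2049 MB is 2098176 kB, and at the default 2048 kB hugepage size that rounds up to the 1025 pages set at @57. A sketch of the implied arithmetic (the explicit rounding below is an assumption, not hugepages.sh verbatim):

HUGEMEM=2049                               # MB, exported at @160 above
default_hugepages=2048                     # kB per 2M hugepage
size=$(( HUGEMEM * 1024 ))                 # 2098176 kB, the @49 argument
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"          # 1025, matching @57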
00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591616 kB' 'MemAvailable: 9522252 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890436 kB' 'Inactive: 1373388 kB' 'Active(anon): 131232 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122304 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144048 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74084 kB' 'KernelStack: 6256 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 
02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.165 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
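A note on the notation in this trace: right-hand sides such as \A\n\o\n\H\u\g\e\P\a\g\e\s are not globs; they are how set -x re-prints a quoted, literal pattern in a [[ ... == ... ]] comparison. A two-line reproduction (illustrative):

    set -x
    get=AnonHugePages
    [[ MemTotal == "$get" ]]   # traces as: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x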
00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 
02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.166 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591112 kB' 'MemAvailable: 9521748 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890348 kB' 'Inactive: 1373388 kB' 'Active(anon): 131144 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122300 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144052 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74088 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.167 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.167 
02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
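Each lookup also picks its data source: with no node argument (local node=), the per-node path /sys/devices/system/node/node/meminfo does not exist and the [[ -n '' ]] guard fails, so the snapshot comes from /proc/meminfo. A hedged sketch of that selection, assuming only the behaviour visible in the trace rather than the exact common.sh logic:

    node=""                                   # "local node=" in the trace
    mem_f=/proc/meminfo                       # default source
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node source
    fi
    echo "reading $mem_f"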
00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591112 kB' 'MemAvailable: 9521748 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890120 kB' 'Inactive: 1373388 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122104 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144048 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74084 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.168 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.169 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
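The snapshot repeated in each printf already carries the expected end state: HugePages_Total and HugePages_Free are both 1025, Hugepagesize is 2048 kB, and Hugetlb matches their product. A quick cross-check of those figures:

    total=1025; page_kb=2048                 # from the traced snapshot
    echo $(( total * page_kb ))              # 2099200, matching 'Hugetlb: 2099200 kB'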
00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 
02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:08.170 nr_hugepages=1025 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:08.170 resv_hugepages=0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:08.170 surplus_hugepages=0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:08.170 anon_hugepages=0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591112 kB' 'MemAvailable: 9521748 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890100 kB' 'Inactive: 1373388 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122016 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144048 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74084 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.170 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 
02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.171 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7591716 kB' 'MemUsed: 4650264 kB' 'SwapCached: 0 kB' 'Active: 890356 kB' 'Inactive: 1373388 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2143056 kB' 'Mapped: 48612 kB' 'AnonPages: 122280 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69964 kB' 'Slab: 144048 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.172 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:08.173 node0=1025 expecting 1025 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:11:08.173 00:11:08.173 real 0m0.461s 00:11:08.173 user 0m0.257s 00:11:08.173 sys 0m0.236s 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:08.173 02:11:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:08.173 ************************************ 00:11:08.173 END TEST odd_alloc 00:11:08.173 ************************************ 00:11:08.173 02:11:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:08.173 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.173 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.173 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:08.173 ************************************ 00:11:08.173 START TEST custom_alloc 00:11:08.173 ************************************ 00:11:08.173 02:11:56 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:11:08.173 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:08.173 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:08.173 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:08.173 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:08.173 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:08.174 
02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:08.174 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:08.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.433 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:08.433 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8642984 kB' 'MemAvailable: 10573620 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890608 kB' 'Inactive: 1373388 kB' 'Active(anon): 131404 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122820 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144064 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74100 kB' 'KernelStack: 6272 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.433 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.698 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
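Note on reading the comparisons in this trace: when the right-hand side of == inside [[ ]] is a quoted variable, bash's xtrace prints it with every character backslash-escaped to mark it as a literal (non-pattern) match, which is why the requested key shows up as \A\n\o\n\H\u\g\e\P\a\g\e\s above. A stand-alone reproduction (variable names mirror common.sh@31-@33; the values are just examples):

    set -x
    get=AnonHugePages               # the key requested from get_meminfo
    var=Bounce val=0                # one parsed meminfo field
    [[ $var == "$get" ]] || :       # traced as: [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x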
00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8642732 kB' 'MemAvailable: 10573368 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890044 kB' 'Inactive: 1373388 kB' 'Active(anon): 130840 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121972 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144084 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74120 kB' 'KernelStack: 6308 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.699 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
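For orientation, a minimal sketch of the get_meminfo routine whose xtrace fills this section, reconstructed from the setup/common.sh line references in the log (@16-@33). The structure follows the trace; the exact argument handling and the behaviour on a missing key are assumptions, not the script verbatim:

    shopt -s extglob                              # needed for the +([0-9]) strip below
    get_meminfo() {
      local get=$1 node=${2:-}                    # common.sh@17-@18
      local var val _
      local mem_f=/proc/meminfo mem               # @20-@22
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then   # @23; node is empty in this run
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"                   # @28
      mem=("${mem[@]#Node +([0-9]) }")            # @29: drop the "Node N " prefix of per-node files
      local IFS=': '                              # @31
      while read -r var val _; do                 # @31
        [[ $var == "$get" ]] || continue          # @32: skip until the requested key
        echo "$val"                               # @33: value in kB, or a bare count
        return 0
      done < <(printf '%s\n' "${mem[@]}")         # @16: the full dump printed above
      return 1                                    # assumption: miss behaviour is not visible in this trace
    }
    # e.g. get_meminfo HugePages_Surp prints 0 in this run, hence surp=0 below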
00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.700 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.700 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8642732 kB' 'MemAvailable: 10573368 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890064 kB' 'Inactive: 1373388 kB' 'Active(anon): 130860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122028 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144080 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74116 kB' 'KernelStack: 6304 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
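The meminfo dump printed at common.sh@16 above already carries the counters this pass cares about (HugePages_Total: 512, HugePages_Free: 512, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugetlb: 1048576 kB). For reference only, the same keys can be pulled directly with a one-liner; this is not how common.sh does it, just a quick cross-check while reading the dump:

    awk '$1 ~ /^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):/' /proc/meminfo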
00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.701 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.702 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:08.703 nr_hugepages=512 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:08.703 resv_hugepages=0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:08.703 surplus_hugepages=0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:08.703 anon_hugepages=0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8642732 kB' 'MemAvailable: 10573368 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890108 kB' 'Inactive: 1373388 kB' 'Active(anon): 130904 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122332 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144080 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74116 kB' 'KernelStack: 6304 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.703 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
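The trace above and below is common.sh's get_meminfo() scanning the meminfo snapshot it just captured: with IFS=': ' it reads each 'var val' pair and hits continue on every field until the requested key (HugePages_Total here) matches, at which point the value is echoed. As a minimal standalone sketch of the same lookup, with an illustrative function name and argument order that are assumptions rather than SPDK's own API:

get_meminfo_field() {
    # key to look up, plus an optional NUMA node id, mirroring the lookup traced above
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo, as common.sh does when a node is given
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # strip the "Node N " prefix used by per-node files, then print the value for the key
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$key:" '$1 == k { print $2 }'
}
# e.g. get_meminfo_field HugePages_Total    -> 512 at this point in the run
#      get_meminfo_field HugePages_Free 0   -> the node0 figure checked later in this test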
00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.704 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8642732 kB' 'MemUsed: 3599248 kB' 'SwapCached: 0 kB' 'Active: 890032 kB' 'Inactive: 1373388 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2143056 kB' 'Mapped: 48612 kB' 'AnonPages: 122276 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69964 kB' 'Slab: 144080 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.705 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:08.706 node0=512 expecting 512 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:08.706 00:11:08.706 real 0m0.491s 00:11:08.706 user 0m0.269s 00:11:08.706 sys 0m0.250s 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:08.706 02:11:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:08.706 ************************************ 00:11:08.706 END TEST custom_alloc 00:11:08.706 ************************************ 00:11:08.706 02:11:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:08.706 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.706 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.706 02:11:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:08.706 ************************************ 00:11:08.706 START TEST no_shrink_alloc 00:11:08.706 ************************************ 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:08.706 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:08.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.965 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:08.965 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:08.965 02:11:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.965 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589672 kB' 'MemAvailable: 9520308 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890260 kB' 'Inactive: 1373388 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144092 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74128 kB' 'KernelStack: 6272 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
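At this point no_shrink_alloc has asked for 1024 of the default 2048 kB hugepages on node 0 (nr_hugepages=1024 in the trace above, and the snapshot now reports HugePages_Total: 1024 and Hugetlb: 2097152 kB), and verify_nr_hugepages is reading the counters back, starting with AnonHugePages. The reservation itself is performed by scripts/setup.sh, which is not traced here; as a hedged sketch, the standard kernel interfaces such a setup ultimately drives, with the node id and page count taken from this run, look like this:

# requires root; 2048 kB is the default hugepage size reported in the snapshots above
echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# counters the verify step compares afterwards
grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages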
00:11:08.966 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo AnonHugePages: skipping non-matching fields Shmem .. WritebackTmp
00:11:09.230 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo AnonHugePages: skipping non-matching fields CommitLimit .. Percpu
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo AnonHugePages: skipping non-matching field HardwareCorrupted
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp node=; mem_f=/proc/meminfo; mapfile -t mem
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589672 kB' 'MemAvailable: 9520308 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890356 kB' 'Inactive: 1373388 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144088 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74124 kB' 'KernelStack: 6304 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
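The snapshot above already describes the hugepage pool this test exercises: 1024 pages of 2048 kB each, all free. As a quick sanity check of those figures (values copied from the printf line above, not something the test recomputes):

# Hugetlb should equal HugePages_Total * Hugepagesize for a 2 MiB-only pool.
echo $(( 1024 * 2048 )) kB   # -> 2097152 kB, matching the reported Hugetlb value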
00:11:09.231 02:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo HugePages_Surp: skipping non-matching fields MemTotal .. Slab
00:11:09.232 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo HugePages_Surp: skipping non-matching fields SReclaimable .. HugePages_Rsvd
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Rsvd node=; mem_f=/proc/meminfo; mapfile -t mem
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589672 kB' 'MemAvailable: 9520308 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890352 kB' 'Inactive: 1373388 kB' 'Active(anon): 131148 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122280 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144088 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74124 kB' 'KernelStack: 6304 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
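The condensed @31/@32 entries above (and the ones that follow for HugePages_Rsvd) are a single field-lookup loop: snapshot the meminfo file into an array, then scan the "Field: value" pairs until the requested counter matches and echo its value. A minimal sketch of that pattern, assuming a plain illustrative helper name (meminfo_lookup) rather than quoting SPDK's setup/common.sh verbatim:

#!/usr/bin/env bash
# Sketch of the lookup pattern visible in the trace; hypothetical helper,
# not a verbatim copy of setup/common.sh get_meminfo.
shopt -s extglob

meminfo_lookup() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # Per-node lookups read the node-specific file when it exists; with no
        # node argument (as in this run, node is empty) the global file is used.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the snapshot one "Field: value [kB]" pair at a time until the
        # requested field matches, then print its numeric value.
        while IFS=': ' read -r var val _; do
                if [[ $var == "$get" ]]; then
                        echo "$val"
                        return 0
                fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

meminfo_lookup HugePages_Surp    # -> 0, per the snapshots in this run
meminfo_lookup HugePages_Total   # -> 1024 once the pool is allocated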
00:11:09.233 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo HugePages_Rsvd: skipping non-matching fields MemTotal .. AnonPages
00:11:09.234 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo HugePages_Rsvd: skipping non-matching fields Mapped .. ShmemPmdMapped
00:11:09.234 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # get_meminfo HugePages_Rsvd: skipping non-matching fields FileHugePages .. HugePages_Free
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
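All three counters the test needs have now been read through that loop: anon=0, surp=0, resv=0. For a by-hand spot check of the same counters, a one-liner such as the following (an equivalent check added for illustration, not part of the SPDK scripts) reads them straight from /proc/meminfo:

awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ { print $1, $2 }' /proc/meminfo
# Expected on this run, per the snapshots above:
#   AnonHugePages: 0
#   HugePages_Total: 1024
#   HugePages_Free: 1024
#   HugePages_Rsvd: 0
#   HugePages_Surp: 0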
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:11:09.235 nr_hugepages=1024
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:11:09.235 resv_hugepages=0
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:11:09.235 surplus_hugepages=0
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:11:09.235 anon_hugepages=0
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Total node=; mem_f=/proc/meminfo; mapfile -t mem
00:11:09.235 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589672 kB' 'MemAvailable: 9520308 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890308 kB' 'Inactive: 1373388 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144088 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74124 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
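The @107/@109 checks above are the bookkeeping step of no_shrink_alloc: every one of the 1024 requested pages must be accounted for by nr_hugepages plus the surplus and reserved counters before the test re-reads HugePages_Total. A sketch of that arithmetic with the values echoed above (variable names mirror the trace; the final comparison against HugePages_Total is an assumption about what follows, since the excerpt ends mid-lookup):

nr_hugepages=1024   # target pool size echoed above
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages (kB)

# The @107/@109 assertions: the allocation must neither shrink nor rely on
# surplus or reserved pages.
(( 1024 == nr_hugepages + surp + resv )) || exit 1
(( 1024 == nr_hugepages )) || exit 1

# Assumed follow-up: compare the freshly read HugePages_Total (1024 in the
# snapshot above) back against the target.
total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
(( total == nr_hugepages )) || exit 1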
continue
[... repetitive get_meminfo xtrace condensed: setup/common.sh@31-@32 read every remaining /proc/meminfo field with IFS=': ' and skip it until the HugePages_Total line is reached ...]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589672 kB' 'MemUsed: 4652308 kB' 'SwapCached: 0 kB' 'Active: 890308 kB' 'Inactive: 1373388 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2143056 kB' 'Mapped: 48612 kB' 'AnonPages: 122212 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69964 kB' 'Slab: 144088 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repetitive get_meminfo xtrace condensed: the same per-field scan runs over /sys/devices/system/node/node0/meminfo, skipping every field until the HugePages_Surp line is reached ...]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
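The node-0 scan comes back with HugePages_Surp: 0, which verify_nr_hugepages folds (together with the reserved count) into nodes_test[0] before comparing it against the 1024 pages that get_nodes recorded from sysfs. As a rough sketch of the same bookkeeping, the per-node counters can be read directly from the standard /sys/devices/system/node/node*/meminfo files; the snippet below is illustrative only (the array names are invented) and is not the hugepages.sh implementation.

    # Sketch: gather per-node hugepage counters from the kernel's per-node meminfo files.
    declare -A node_total node_free node_surp
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        while read -r _ _ key val; do        # lines look like: "Node 0 HugePages_Total:  1024"
            case $key in
                HugePages_Total:) node_total[$node]=$val ;;
                HugePages_Free:)  node_free[$node]=$val ;;
                HugePages_Surp:)  node_surp[$node]=$val ;;
            esac
        done <"$node_dir/meminfo"
        echo "node$node: total=${node_total[$node]} free=${node_free[$node]} surp=${node_surp[$node]}"
    done

On this single-node VM it would print node0: total=1024 free=1024 surp=0, which is exactly the state the check below verifies.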
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
INFO: Requested 512 hugepages but 1024 already allocated on node0
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
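scripts/setup.sh is entered above with NRHUGE=512 and CLEAR_HUGE=no, and it reports that node0 already holds 1024 hugepages; since this is the no_shrink_alloc case, the existing larger reservation is left in place rather than being trimmed to 512, as the 1024-page totals in the later snapshots confirm. For reference, a hedged sketch of checking (and only growing) the 2 MiB pool through the standard procfs/sysfs knobs that message refers to; this is not the setup.sh logic itself, just the generic kernel interface.

    # Sketch: inspect the 2 MiB hugepage pool and grow it only if it is smaller than requested.
    want=512
    have=$(</proc/sys/vm/nr_hugepages)          # system-wide 2 MiB pages currently reserved
    node0=$(</sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "requested=$want reserved=$have node0=$node0"
    if (( have < want )); then
        echo "$want" | sudo tee /proc/sys/vm/nr_hugepages >/dev/null   # never shrink an existing reservation
    fi

With 1024 pages already reserved, as in the log, the request for 512 is a no-op.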
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7585716 kB' 'MemAvailable: 9516352 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890900 kB' 'Inactive: 1373388 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144048 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74084 kB' 'KernelStack: 6308 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.540 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.540 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[... repetitive get_meminfo xtrace condensed: the per-field scan of /proc/meminfo repeats for the AnonHugePages lookup, skipping each field; the AnonHugePages match follows below ...]
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7585716 kB' 'MemAvailable: 9516352 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890492 kB' 'Inactive: 1373388 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122360 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144052 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74088 kB' 'KernelStack: 6304 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.541 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.542 
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.542 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7585716 kB' 'MemAvailable: 9516352 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890264 kB' 'Inactive: 1373388 kB' 'Active(anon): 131060 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144056 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74092 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.543 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.544 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- 
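The AnonHugePages, HugePages_Surp and HugePages_Rsvd values above all come from the same get_meminfo lookup in setup/common.sh (@17-33): read /proc/meminfo (or a per-node meminfo file when a node number is given), strip any "Node <n> " prefix, then scan "Key: value" pairs until the requested key matches and echo its value. The backslash-escaped right-hand sides in the trace are just how bash xtrace renders the quoted comparison pattern. The bash sketch below reconstructs that flow from the xtrace only; it is not the SPDK source verbatim, and the helper name get_meminfo_sketch is illustrative.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> "

# Minimal sketch of the lookup traced at setup/common.sh@17-33 above.
get_meminfo_sketch() {
    local get=$1            # field to fetch, e.g. HugePages_Rsvd
    local node=${2:-}       # optional NUMA node number; empty means system-wide
    local var val _ line
    local mem_f=/proc/meminfo
    # Use the node-local meminfo file when it exists (it does not in this run,
    # hence the /sys/devices/system/node/node/meminfo miss in the trace).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" lines until the requested key is found.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

On the meminfo dump printed above, get_meminfo_sketch HugePages_Rsvd prints 0, matching the echo 0 / return 0 that ends each scan in the trace.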
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:09.545 nr_hugepages=1024 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:09.545 resv_hugepages=0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:09.545 surplus_hugepages=0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:09.545 anon_hugepages=0 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7585716 kB' 'MemAvailable: 9516352 kB' 'Buffers: 2436 kB' 'Cached: 2140620 kB' 'SwapCached: 0 kB' 'Active: 890236 kB' 'Inactive: 1373388 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 69964 kB' 'Slab: 144056 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74092 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.545 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.546 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.806 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
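The long run of "# continue" entries above and below is get_meminfo walking every field of the meminfo file until it reaches the one it was asked for (HugePages_Total here, HugePages_Surp for node 0 a little later). A condensed sketch of that lookup, reconstructed from the traced setup/common.sh commands rather than copied verbatim from the script:

#!/usr/bin/env bash
shopt -s extglob        # needed for the +([0-9]) pattern below

get_meminfo() {         # usage: get_meminfo <field> [node]
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem line var val _

        # Prefer the per-NUMA-node view when a node is given and sysfs exposes one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue        # the per-field noise in the trace
                echo "$val"
                return 0
        done
        return 1
}

# e.g. get_meminfo HugePages_Total   -> 1024
#      get_meminfo HugePages_Surp 0  -> 0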
00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:09.807 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.807 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7585716 kB' 'MemUsed: 4656264 kB' 'SwapCached: 0 kB' 'Active: 890236 kB' 'Inactive: 1373388 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1373388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 2143056 kB' 'Mapped: 48612 kB' 'AnonPages: 122200 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69964 kB' 'Slab: 144056 kB' 'SReclaimable: 69964 kB' 'SUnreclaim: 74092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 
02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.808 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:09.809 02:11:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:09.809 node0=1024 expecting 1024 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:09.809 ************************************ 00:11:09.809 END TEST no_shrink_alloc 00:11:09.809 ************************************ 00:11:09.809 00:11:09.809 real 0m0.958s 00:11:09.809 user 0m0.530s 00:11:09.809 sys 0m0.452s 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.809 02:11:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:09.809 02:11:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:09.809 00:11:09.809 real 0m4.216s 00:11:09.809 user 0m2.193s 00:11:09.809 sys 0m2.077s 00:11:09.809 02:11:57 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.809 ************************************ 00:11:09.809 END TEST hugepages 00:11:09.809 ************************************ 00:11:09.809 02:11:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 02:11:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:09.809 02:11:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:09.809 02:11:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.809 02:11:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 ************************************ 00:11:09.809 START TEST driver 00:11:09.809 ************************************ 00:11:09.809 02:11:57 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:09.809 * Looking for test storage... 
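The clear_hp calls traced a little earlier (just before TEST driver starts) walk every hugepage-size directory of every NUMA node and write 0 back, so the hugepages test leaves no reserved pages behind. A small sketch of that cleanup; the redirect target is not visible in the xtrace output, but nr_hugepages is the sysfs knob these paths expose, and the real helper iterates the node indices it collected earlier rather than a glob (needs root):

clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
                for hp in "$node"/hugepages/hugepages-*; do
                        # Drop every reserved hugepage of this size on this node.
                        echo 0 > "$hp/nr_hugepages"
                done
        done
        # Tell later test stages that hugepages were already cleared.
        export CLEAR_HUGE=yes
}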
00:11:09.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:09.809 02:11:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:09.809 02:11:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:09.809 02:11:57 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:10.376 02:11:58 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:10.376 02:11:58 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:10.376 02:11:58 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.376 02:11:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:10.376 ************************************ 00:11:10.376 START TEST guess_driver 00:11:10.376 ************************************ 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:11:10.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:11:10.376 Looking for driver=uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
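guess_driver above first tries vfio, which is only usable when /sys/kernel/iommu_groups is populated or unsafe no-IOMMU mode is enabled, and otherwise checks whether uio_pci_generic resolves to a real kernel module via modprobe --show-depends. A sketch of that decision, reconstructed from the trace rather than copied from setup/driver.sh:

shopt -s nullglob       # an empty iommu_groups dir must yield an empty array

pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=''

        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
                unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi

        # vfio is only usable with IOMMU groups or the unsafe no-IOMMU override.
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
                echo vfio-pci
                return 0
        fi

        # Fall back to uio_pci_generic if modprobe resolves it to real .ko files.
        if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
                echo uio_pci_generic
                return 0
        fi

        echo 'No valid driver found'
        return 1
}

# In this run there are no IOMMU groups and unsafe mode is unset, so the trace
# falls through to uio_pci_generic ("Looking for driver=uio_pci_generic").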
00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:11:10.376 02:11:58 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:11.315 02:11:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.881 ************************************ 00:11:11.881 END TEST guess_driver 00:11:11.881 ************************************ 00:11:11.881 00:11:11.881 real 0m1.485s 00:11:11.881 user 0m0.614s 00:11:11.881 sys 0m0.874s 00:11:11.881 02:11:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.881 02:11:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:11:11.881 ************************************ 00:11:11.881 END TEST driver 00:11:11.881 ************************************ 00:11:11.881 00:11:11.881 real 0m2.142s 00:11:11.881 user 0m0.824s 00:11:11.881 sys 0m1.353s 00:11:11.881 02:11:59 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.881 02:11:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:11.881 02:11:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:11.881 02:11:59 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:11.882 02:11:59 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.882 02:11:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:11.882 ************************************ 00:11:11.882 START TEST devices 00:11:11.882 ************************************ 00:11:11.882 02:11:59 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:12.140 * Looking for test storage... 
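The guess is then verified, in the guess_driver trace that ends above, by replaying `setup.sh config` and scanning its output for the '->' marker lines, comparing the driver each device was actually bound to against the guess; any mismatch bumps fail. A sketch of that check, with the field layout inferred from the traced `read -r _ _ _ _ marker setup_driver`:

verify_driver() {       # usage: verify_driver <expected_driver>
        local expected=$1
        local _ marker setup_driver fail=0

        while read -r _ _ _ _ marker setup_driver; do
                # Lines such as "devices:" have no '->' in field 5; skip them.
                [[ $marker == '->' ]] || continue
                [[ $setup_driver == "$expected" ]] || fail=1
        done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)

        return "$fail"
}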
00:11:12.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:12.140 02:11:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:12.140 02:11:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:11:12.140 02:11:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:12.140 02:11:59 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:12.706 02:12:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:12.706 02:12:00 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:12.706 02:12:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:11:12.707 No valid GPT data, bailing 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:12.707 02:12:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:12.707 02:12:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:12.707 02:12:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:12.707 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:11:12.707 02:12:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:11:12.965 No valid GPT data, bailing 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:12.965 02:12:00 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:11:12.965 No valid GPT data, bailing 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:11:12.965 02:12:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:12.965 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:11:12.965 02:12:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:11:12.966 02:12:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:11:12.966 No valid GPT data, bailing 00:11:12.966 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:12.966 02:12:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:12.966 02:12:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:11:12.966 02:12:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:11:12.966 02:12:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:11:12.966 02:12:00 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:12.966 02:12:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:12.966 02:12:00 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.966 02:12:00 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.966 02:12:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:12.966 ************************************ 00:11:12.966 START TEST nvme_mount 00:11:12.966 ************************************ 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:12.966 02:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:14.370 Creating new GPT entries in memory. 00:11:14.370 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:14.370 other utilities. 00:11:14.370 02:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:14.370 02:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:14.370 02:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:14.370 02:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:14.370 02:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:15.307 Creating new GPT entries in memory. 00:11:15.307 The operation has completed successfully. 
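The sgdisk sequence just above is partition_drive: wipe the GPT with --zap-all, then create part_no equal partitions by computing start and end sectors from the requested byte size (the script divides the size by 4096 before using it as a sector count, which is where 264191 comes from for the default 1 GiB request starting at sector 2048). A condensed sketch reconstructed from the trace; the real helper also wraps sgdisk in sync_dev_uevents.sh so it can wait for the new partition uevents:

partition_drive() {     # usage: partition_drive <disk> [part_no] [size_bytes]
        local disk=$1 part_no=${2:-1} size=${3:-1073741824}
        local part part_start=0 part_end=0

        (( size /= 4096 ))      # bytes -> per-partition sector count, as traced

        sgdisk "/dev/$disk" --zap-all   # destroy any existing GPT/MBR structures

        for (( part = 1; part <= part_no; part++ )); do
                (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
                (( part_end = part_start + size - 1 ))
                # flock serializes concurrent sgdisk runs against the same disk.
                flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
}

# partition_drive nvme0n1 1  ->  sgdisk /dev/nvme0n1 --new=1:2048:264191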
00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58185 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:15.307 02:12:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.307 02:12:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.307 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:15.567 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:15.567 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:15.825 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:15.825 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:15.825 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:15.825 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:15.825 02:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.084 02:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.084 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.084 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:16.341 02:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.599 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:16.857 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:16.857 00:11:16.857 real 0m3.869s 00:11:16.857 user 0m0.630s 00:11:16.857 sys 0m0.891s 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.857 02:12:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:16.857 ************************************ 00:11:16.857 END TEST nvme_mount 00:11:16.857 
************************************ 00:11:16.857 02:12:04 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:16.857 02:12:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:16.857 02:12:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:16.857 02:12:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:16.857 ************************************ 00:11:16.857 START TEST dm_mount 00:11:16.857 ************************************ 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:16.857 02:12:04 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:18.303 Creating new GPT entries in memory. 00:11:18.303 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:18.303 other utilities. 00:11:18.303 02:12:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:18.303 02:12:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:18.303 02:12:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:18.303 02:12:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:18.303 02:12:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:19.235 Creating new GPT entries in memory. 00:11:19.235 The operation has completed successfully. 
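The dm_mount setup traced above drives sgdisk through the common.sh helpers to rebuild nvme0n1's partition table before the device-mapper test. A rough standalone sketch of the same layout, assuming 512-byte logical blocks on nvme0n1 (so each 262,144-sector partition works out to 128 MiB), would be:

  # wipe any existing GPT/MBR signatures and partition table
  sgdisk /dev/nvme0n1 --zap-all
  # two 262144-sector partitions, matching the LBA ranges shown in the trace
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
  # wait for udev to publish the new partition nodes before using them
  udevadm settle

The flock mirrors how the helper serializes concurrent sgdisk calls; udevadm settle stands in here for the repo's sync_dev_uevents.sh script, which the trace uses to wait for the partition uevents.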
00:11:19.235 02:12:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:19.235 02:12:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:19.235 02:12:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:19.235 02:12:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:19.235 02:12:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:20.167 The operation has completed successfully. 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58613 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:20.167 02:12:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.167 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:20.425 02:12:08 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:20.425 02:12:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.683 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:20.941 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:20.941 02:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:11:20.941 00:11:20.942 real 0m4.042s 00:11:20.942 user 0m0.407s 00:11:20.942 sys 0m0.605s 00:11:20.942 02:12:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:20.942 02:12:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:20.942 ************************************ 00:11:20.942 END TEST dm_mount 00:11:20.942 ************************************ 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:20.942 02:12:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:21.199 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:21.199 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:21.199 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:21.199 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:21.199 02:12:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:21.199 00:11:21.199 real 0m9.327s 00:11:21.199 user 0m1.652s 00:11:21.199 sys 0m2.018s 00:11:21.199 02:12:09 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.199 02:12:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 ************************************ 00:11:21.199 END TEST devices 00:11:21.199 ************************************ 00:11:21.457 00:11:21.457 real 0m20.244s 00:11:21.457 user 0m6.737s 00:11:21.457 sys 0m7.882s 00:11:21.457 02:12:09 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.457 ************************************ 00:11:21.457 END TEST setup.sh 00:11:21.457 02:12:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:21.457 ************************************ 00:11:21.457 02:12:09 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:22.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.023 Hugepages 00:11:22.023 node hugesize free / total 00:11:22.023 node0 1048576kB 0 / 0 00:11:22.023 node0 2048kB 2048 / 2048 00:11:22.023 00:11:22.023 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:22.023 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:22.023 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:22.023 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:11:22.023 02:12:09 -- spdk/autotest.sh@130 -- # uname -s 00:11:22.023 02:12:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:22.023 02:12:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:22.023 02:12:09 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.847 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.847 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.847 02:12:10 -- common/autotest_common.sh@1528 -- # sleep 1 00:11:24.221 02:12:11 -- common/autotest_common.sh@1529 -- # bdfs=() 00:11:24.221 02:12:11 -- common/autotest_common.sh@1529 -- # local bdfs 00:11:24.221 02:12:11 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:11:24.221 02:12:11 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:11:24.221 02:12:11 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:24.221 02:12:11 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:24.221 02:12:11 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:24.221 02:12:11 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:24.221 02:12:11 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:24.221 02:12:11 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:24.221 02:12:11 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:24.221 02:12:11 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:24.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.221 Waiting for block devices as requested 00:11:24.221 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.480 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.480 02:12:12 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:24.480 02:12:12 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:24.480 02:12:12 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:24.480 02:12:12 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:24.480 02:12:12 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1553 -- # continue 00:11:24.480 02:12:12 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:24.480 02:12:12 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:11:24.480 02:12:12 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:24.480 02:12:12 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:24.480 02:12:12 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:24.480 02:12:12 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:24.480 02:12:12 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:24.480 02:12:12 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:24.480 02:12:12 -- common/autotest_common.sh@1553 -- # continue 00:11:24.480 02:12:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:24.480 02:12:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.480 02:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 02:12:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:24.480 02:12:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:24.480 02:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 02:12:12 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.305 02:12:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:25.305 02:12:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.305 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:11:25.305 02:12:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:25.305 02:12:13 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:11:25.305 02:12:13 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:11:25.305 02:12:13 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:11:25.305 02:12:13 -- common/autotest_common.sh@1573 -- # local bdfs 00:11:25.305 02:12:13 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:11:25.305 02:12:13 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:25.305 02:12:13 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:25.305 02:12:13 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:25.305 02:12:13 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:25.305 02:12:13 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:25.568 02:12:13 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:25.568 02:12:13 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:25.568 02:12:13 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:25.568 02:12:13 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:25.568 02:12:13 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:25.568 02:12:13 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:25.568 02:12:13 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:25.568 02:12:13 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:25.568 02:12:13 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:25.568 02:12:13 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:25.568 02:12:13 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:11:25.568 02:12:13 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:11:25.568 02:12:13 -- common/autotest_common.sh@1589 -- # return 0 00:11:25.568 02:12:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:25.568 02:12:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:25.568 02:12:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:25.568 02:12:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:25.568 02:12:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:25.568 02:12:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:25.568 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:11:25.568 02:12:13 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:25.568 02:12:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:25.568 02:12:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.568 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:11:25.568 ************************************ 00:11:25.568 START TEST env 00:11:25.568 ************************************ 00:11:25.568 02:12:13 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:25.568 * Looking for test storage... 
00:11:25.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:25.568 02:12:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:25.568 02:12:13 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:25.568 02:12:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.568 02:12:13 env -- common/autotest_common.sh@10 -- # set +x 00:11:25.568 ************************************ 00:11:25.568 START TEST env_memory 00:11:25.568 ************************************ 00:11:25.568 02:12:13 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:25.568 00:11:25.568 00:11:25.568 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.568 http://cunit.sourceforge.net/ 00:11:25.568 00:11:25.568 00:11:25.568 Suite: memory 00:11:25.568 Test: alloc and free memory map ...[2024-05-15 02:12:13.494709] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:25.568 passed 00:11:25.568 Test: mem map translation ...[2024-05-15 02:12:13.526432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:25.568 [2024-05-15 02:12:13.526498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:25.568 [2024-05-15 02:12:13.526559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:25.569 [2024-05-15 02:12:13.526574] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:25.569 passed 00:11:25.828 Test: mem map registration ...[2024-05-15 02:12:13.590422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:25.828 [2024-05-15 02:12:13.590495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:25.828 passed 00:11:25.828 Test: mem map adjacent registrations ...passed 00:11:25.828 00:11:25.828 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.828 suites 1 1 n/a 0 0 00:11:25.828 tests 4 4 4 0 0 00:11:25.828 asserts 152 152 152 0 n/a 00:11:25.828 00:11:25.828 Elapsed time = 0.207 seconds 00:11:25.828 00:11:25.828 real 0m0.220s 00:11:25.828 user 0m0.205s 00:11:25.828 sys 0m0.014s 00:11:25.828 02:12:13 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:25.828 02:12:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:25.828 ************************************ 00:11:25.828 END TEST env_memory 00:11:25.828 ************************************ 00:11:25.828 02:12:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:25.828 02:12:13 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:25.828 02:12:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.828 02:12:13 env -- common/autotest_common.sh@10 -- # set +x 00:11:25.828 ************************************ 00:11:25.828 START TEST env_vtophys 00:11:25.828 ************************************ 00:11:25.828 02:12:13 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:25.828 EAL: lib.eal log level changed from notice to debug 00:11:25.828 EAL: Detected lcore 0 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 1 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 2 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 3 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 4 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 5 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 6 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 7 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 8 as core 0 on socket 0 00:11:25.828 EAL: Detected lcore 9 as core 0 on socket 0 00:11:25.828 EAL: Maximum logical cores by configuration: 128 00:11:25.828 EAL: Detected CPU lcores: 10 00:11:25.828 EAL: Detected NUMA nodes: 1 00:11:25.828 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:11:25.828 EAL: Detected shared linkage of DPDK 00:11:25.828 EAL: No shared files mode enabled, IPC will be disabled 00:11:25.828 EAL: Selected IOVA mode 'PA' 00:11:25.828 EAL: Probing VFIO support... 00:11:25.828 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:25.828 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:25.828 EAL: Ask a virtual area of 0x2e000 bytes 00:11:25.828 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:25.828 EAL: Setting up physically contiguous memory... 00:11:25.828 EAL: Setting maximum number of open files to 524288 00:11:25.828 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:25.828 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:25.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.828 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:25.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.828 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:25.828 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:25.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.828 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:25.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.828 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:25.828 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:25.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.828 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:25.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.828 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:25.828 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:25.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.828 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:25.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.828 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:25.828 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:25.828 EAL: Hugepages will be freed exactly as allocated. 
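The EAL lines above show the vtophys run falling back from VFIO to physical-address (PA) IOVA mode because no vfio modules are loaded in this VM. A quick way to check the same preconditions by hand on a test host (a sketch of standard checks, not part of the test itself) is:

  # is the vfio driver stack loaded?
  lsmod | grep -E '^vfio'
  # are any IOMMU groups exposed? an empty or missing directory usually forces IOVA mode 'PA'
  ls /sys/kernel/iommu_groups/
  # for VA mode the kernel command line typically needs intel_iommu=on (and often iommu=pt)
  cat /proc/cmdline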
00:11:25.828 EAL: No shared files mode enabled, IPC is disabled 00:11:25.828 EAL: No shared files mode enabled, IPC is disabled 00:11:26.087 EAL: TSC frequency is ~2200000 KHz 00:11:26.087 EAL: Main lcore 0 is ready (tid=7f41f8611a00;cpuset=[0]) 00:11:26.087 EAL: Trying to obtain current memory policy. 00:11:26.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.087 EAL: Restoring previous memory policy: 0 00:11:26.087 EAL: request: mp_malloc_sync 00:11:26.087 EAL: No shared files mode enabled, IPC is disabled 00:11:26.087 EAL: Heap on socket 0 was expanded by 2MB 00:11:26.087 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:26.087 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:26.087 EAL: Mem event callback 'spdk:(nil)' registered 00:11:26.088 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:26.088 00:11:26.088 00:11:26.088 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.088 http://cunit.sourceforge.net/ 00:11:26.088 00:11:26.088 00:11:26.088 Suite: components_suite 00:11:26.088 Test: vtophys_malloc_test ...passed 00:11:26.088 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 4MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 4MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 6MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 6MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 10MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 10MB 00:11:26.088 EAL: Trying to obtain current memory policy. 
00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 18MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 18MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 34MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 34MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 66MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 66MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 130MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 130MB 00:11:26.088 EAL: Trying to obtain current memory policy. 00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.088 EAL: Restoring previous memory policy: 4 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was expanded by 258MB 00:11:26.088 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.088 EAL: request: mp_malloc_sync 00:11:26.088 EAL: No shared files mode enabled, IPC is disabled 00:11:26.088 EAL: Heap on socket 0 was shrunk by 258MB 00:11:26.088 EAL: Trying to obtain current memory policy. 
00:11:26.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.347 EAL: Restoring previous memory policy: 4 00:11:26.347 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.347 EAL: request: mp_malloc_sync 00:11:26.347 EAL: No shared files mode enabled, IPC is disabled 00:11:26.347 EAL: Heap on socket 0 was expanded by 514MB 00:11:26.347 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.347 EAL: request: mp_malloc_sync 00:11:26.347 EAL: No shared files mode enabled, IPC is disabled 00:11:26.347 EAL: Heap on socket 0 was shrunk by 514MB 00:11:26.347 EAL: Trying to obtain current memory policy. 00:11:26.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.605 EAL: Restoring previous memory policy: 4 00:11:26.605 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.605 EAL: request: mp_malloc_sync 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 EAL: Heap on socket 0 was expanded by 1026MB 00:11:26.605 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.605 EAL: request: mp_malloc_sync 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:26.605 passed 00:11:26.605 00:11:26.605 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.605 suites 1 1 n/a 0 0 00:11:26.605 tests 2 2 2 0 0 00:11:26.605 asserts 5358 5358 5358 0 n/a 00:11:26.605 00:11:26.605 Elapsed time = 0.688 seconds 00:11:26.605 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.605 EAL: request: mp_malloc_sync 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 EAL: Heap on socket 0 was shrunk by 2MB 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 EAL: No shared files mode enabled, IPC is disabled 00:11:26.605 00:11:26.605 real 0m0.881s 00:11:26.605 user 0m0.427s 00:11:26.605 sys 0m0.322s 00:11:26.605 02:12:14 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.605 02:12:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:26.605 ************************************ 00:11:26.605 END TEST env_vtophys 00:11:26.605 ************************************ 00:11:26.916 02:12:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:26.916 02:12:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:26.916 02:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.916 02:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:11:26.916 ************************************ 00:11:26.916 START TEST env_pci 00:11:26.916 ************************************ 00:11:26.916 02:12:14 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:26.916 00:11:26.916 00:11:26.916 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.916 http://cunit.sourceforge.net/ 00:11:26.916 00:11:26.916 00:11:26.916 Suite: pci 00:11:26.916 Test: pci_hook ...[2024-05-15 02:12:14.649806] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59789 has claimed it 00:11:26.916 passed 00:11:26.916 00:11:26.917 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.917 suites 1 1 n/a 0 0 00:11:26.917 tests 1 1 1 0 0 00:11:26.917 asserts 25 25 25 0 n/a 00:11:26.917 00:11:26.917 Elapsed time = 0.002 seconds 00:11:26.917 EAL: Cannot find 
device (10000:00:01.0) 00:11:26.917 EAL: Failed to attach device on primary process 00:11:26.917 00:11:26.917 real 0m0.018s 00:11:26.917 user 0m0.006s 00:11:26.917 sys 0m0.011s 00:11:26.917 02:12:14 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.917 02:12:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:26.917 ************************************ 00:11:26.917 END TEST env_pci 00:11:26.917 ************************************ 00:11:26.917 02:12:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:26.917 02:12:14 env -- env/env.sh@15 -- # uname 00:11:26.917 02:12:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:26.917 02:12:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:26.917 02:12:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:26.917 02:12:14 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:26.917 02:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.917 02:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:11:26.917 ************************************ 00:11:26.917 START TEST env_dpdk_post_init 00:11:26.917 ************************************ 00:11:26.917 02:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:26.917 EAL: Detected CPU lcores: 10 00:11:26.917 EAL: Detected NUMA nodes: 1 00:11:26.917 EAL: Detected shared linkage of DPDK 00:11:26.917 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:26.917 EAL: Selected IOVA mode 'PA' 00:11:26.917 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:26.917 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:26.917 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:26.917 Starting DPDK initialization... 00:11:26.917 Starting SPDK post initialization... 00:11:26.917 SPDK NVMe probe 00:11:26.917 Attaching to 0000:00:10.0 00:11:26.917 Attaching to 0000:00:11.0 00:11:26.917 Attached to 0000:00:10.0 00:11:26.917 Attached to 0000:00:11.0 00:11:26.917 Cleaning up... 
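env_dpdk_post_init only sees the two QEMU NVMe controllers because scripts/setup.sh rebound them from the kernel nvme driver to uio_pci_generic earlier in this run. One way to confirm which driver currently owns each controller (a sketch using standard tools plus the repo script already invoked above) is:

  # per-device view of the bound kernel driver
  lspci -nnk -s 00:10.0
  lspci -nnk -s 00:11.0
  # or the repo's own summary, as used elsewhere in this run
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status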
00:11:26.917 00:11:26.917 real 0m0.185s 00:11:26.917 user 0m0.049s 00:11:26.917 sys 0m0.034s 00:11:26.917 02:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.917 02:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:26.917 ************************************ 00:11:26.917 END TEST env_dpdk_post_init 00:11:26.917 ************************************ 00:11:27.176 02:12:14 env -- env/env.sh@26 -- # uname 00:11:27.176 02:12:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:27.176 02:12:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:27.176 02:12:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:27.176 02:12:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.176 02:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:11:27.176 ************************************ 00:11:27.176 START TEST env_mem_callbacks 00:11:27.176 ************************************ 00:11:27.176 02:12:14 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:27.176 EAL: Detected CPU lcores: 10 00:11:27.176 EAL: Detected NUMA nodes: 1 00:11:27.176 EAL: Detected shared linkage of DPDK 00:11:27.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:27.176 EAL: Selected IOVA mode 'PA' 00:11:27.176 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:27.176 00:11:27.176 00:11:27.176 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.176 http://cunit.sourceforge.net/ 00:11:27.176 00:11:27.176 00:11:27.176 Suite: memory 00:11:27.176 Test: test ... 00:11:27.176 register 0x200000200000 2097152 00:11:27.176 malloc 3145728 00:11:27.176 register 0x200000400000 4194304 00:11:27.176 buf 0x200000500000 len 3145728 PASSED 00:11:27.176 malloc 64 00:11:27.176 buf 0x2000004fff40 len 64 PASSED 00:11:27.176 malloc 4194304 00:11:27.176 register 0x200000800000 6291456 00:11:27.176 buf 0x200000a00000 len 4194304 PASSED 00:11:27.176 free 0x200000500000 3145728 00:11:27.176 free 0x2000004fff40 64 00:11:27.176 unregister 0x200000400000 4194304 PASSED 00:11:27.176 free 0x200000a00000 4194304 00:11:27.176 unregister 0x200000800000 6291456 PASSED 00:11:27.176 malloc 8388608 00:11:27.176 register 0x200000400000 10485760 00:11:27.176 buf 0x200000600000 len 8388608 PASSED 00:11:27.176 free 0x200000600000 8388608 00:11:27.176 unregister 0x200000400000 10485760 PASSED 00:11:27.176 passed 00:11:27.176 00:11:27.176 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.176 suites 1 1 n/a 0 0 00:11:27.176 tests 1 1 1 0 0 00:11:27.176 asserts 15 15 15 0 n/a 00:11:27.176 00:11:27.176 Elapsed time = 0.005 seconds 00:11:27.176 00:11:27.176 real 0m0.142s 00:11:27.176 user 0m0.016s 00:11:27.176 sys 0m0.025s 00:11:27.176 02:12:15 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.176 02:12:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:27.176 ************************************ 00:11:27.176 END TEST env_mem_callbacks 00:11:27.176 ************************************ 00:11:27.176 00:11:27.176 real 0m1.731s 00:11:27.176 user 0m0.804s 00:11:27.176 sys 0m0.571s 00:11:27.176 02:12:15 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.177 02:12:15 env -- common/autotest_common.sh@10 -- # set +x 00:11:27.177 ************************************ 00:11:27.177 END TEST env 00:11:27.177 
************************************ 00:11:27.177 02:12:15 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:27.177 02:12:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:27.177 02:12:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.177 02:12:15 -- common/autotest_common.sh@10 -- # set +x 00:11:27.177 ************************************ 00:11:27.177 START TEST rpc 00:11:27.177 ************************************ 00:11:27.177 02:12:15 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:27.436 * Looking for test storage... 00:11:27.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:27.436 02:12:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59904 00:11:27.436 02:12:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:27.436 02:12:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:27.436 02:12:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59904 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@827 -- # '[' -z 59904 ']' 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:27.436 02:12:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.436 [2024-05-15 02:12:15.281561] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:27.436 [2024-05-15 02:12:15.282110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:11:27.436 [2024-05-15 02:12:15.414547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.696 [2024-05-15 02:12:15.479245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:27.696 [2024-05-15 02:12:15.479305] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59904' to capture a snapshot of events at runtime. 00:11:27.696 [2024-05-15 02:12:15.479316] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.696 [2024-05-15 02:12:15.479325] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.696 [2024-05-15 02:12:15.479332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59904 for offline analysis/debug. 
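The target above was launched with '-e bdev', and the notices just printed describe how to pull a trace of those tracepoints while the rpc tests below run. A minimal sketch, assuming pid 59904 is still alive and the same workspace layout (the /tmp output paths are illustrative, not part of this run):

  cd /home/vagrant/spdk_repo/spdk
  # decode a live snapshot of the enabled bdev tracepoints for the running target
  build/bin/spdk_trace -s spdk_tgt -p 59904 > /tmp/spdk_tgt_trace.txt
  # or keep the raw shared-memory ring for offline decoding later
  cp /dev/shm/spdk_tgt_trace.pid59904 /tmp/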
00:11:27.696 [2024-05-15 02:12:15.479371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.696 02:12:15 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:27.696 02:12:15 rpc -- common/autotest_common.sh@860 -- # return 0 00:11:27.696 02:12:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:27.696 02:12:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:27.696 02:12:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:27.696 02:12:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:27.696 02:12:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:27.696 02:12:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.696 02:12:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.696 ************************************ 00:11:27.696 START TEST rpc_integrity 00:11:27.696 ************************************ 00:11:27.696 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:27.696 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.696 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.696 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.696 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.696 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:27.696 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:27.954 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:27.954 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.954 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:27.954 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.954 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.954 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:27.954 { 00:11:27.954 "aliases": [ 00:11:27.954 "7bf3a2d1-b39e-4392-811e-4630f571c671" 00:11:27.954 ], 00:11:27.954 "assigned_rate_limits": { 00:11:27.954 "r_mbytes_per_sec": 0, 00:11:27.954 "rw_ios_per_sec": 0, 00:11:27.954 "rw_mbytes_per_sec": 0, 00:11:27.954 "w_mbytes_per_sec": 0 00:11:27.954 }, 00:11:27.954 "block_size": 512, 00:11:27.954 "claimed": false, 00:11:27.954 "driver_specific": {}, 00:11:27.954 "memory_domains": [ 00:11:27.955 { 00:11:27.955 "dma_device_id": "system", 00:11:27.955 "dma_device_type": 1 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.955 "dma_device_type": 2 00:11:27.955 } 00:11:27.955 ], 00:11:27.955 "name": "Malloc0", 
00:11:27.955 "num_blocks": 16384, 00:11:27.955 "product_name": "Malloc disk", 00:11:27.955 "supported_io_types": { 00:11:27.955 "abort": true, 00:11:27.955 "compare": false, 00:11:27.955 "compare_and_write": false, 00:11:27.955 "flush": true, 00:11:27.955 "nvme_admin": false, 00:11:27.955 "nvme_io": false, 00:11:27.955 "read": true, 00:11:27.955 "reset": true, 00:11:27.955 "unmap": true, 00:11:27.955 "write": true, 00:11:27.955 "write_zeroes": true 00:11:27.955 }, 00:11:27.955 "uuid": "7bf3a2d1-b39e-4392-811e-4630f571c671", 00:11:27.955 "zoned": false 00:11:27.955 } 00:11:27.955 ]' 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 [2024-05-15 02:12:15.817250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:27.955 [2024-05-15 02:12:15.817317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.955 [2024-05-15 02:12:15.817337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xabee60 00:11:27.955 [2024-05-15 02:12:15.817347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.955 [2024-05-15 02:12:15.818979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.955 [2024-05-15 02:12:15.819016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:27.955 Passthru0 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:27.955 { 00:11:27.955 "aliases": [ 00:11:27.955 "7bf3a2d1-b39e-4392-811e-4630f571c671" 00:11:27.955 ], 00:11:27.955 "assigned_rate_limits": { 00:11:27.955 "r_mbytes_per_sec": 0, 00:11:27.955 "rw_ios_per_sec": 0, 00:11:27.955 "rw_mbytes_per_sec": 0, 00:11:27.955 "w_mbytes_per_sec": 0 00:11:27.955 }, 00:11:27.955 "block_size": 512, 00:11:27.955 "claim_type": "exclusive_write", 00:11:27.955 "claimed": true, 00:11:27.955 "driver_specific": {}, 00:11:27.955 "memory_domains": [ 00:11:27.955 { 00:11:27.955 "dma_device_id": "system", 00:11:27.955 "dma_device_type": 1 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.955 "dma_device_type": 2 00:11:27.955 } 00:11:27.955 ], 00:11:27.955 "name": "Malloc0", 00:11:27.955 "num_blocks": 16384, 00:11:27.955 "product_name": "Malloc disk", 00:11:27.955 "supported_io_types": { 00:11:27.955 "abort": true, 00:11:27.955 "compare": false, 00:11:27.955 "compare_and_write": false, 00:11:27.955 "flush": true, 00:11:27.955 "nvme_admin": false, 00:11:27.955 "nvme_io": false, 00:11:27.955 "read": true, 00:11:27.955 "reset": true, 00:11:27.955 "unmap": true, 00:11:27.955 "write": true, 00:11:27.955 "write_zeroes": true 00:11:27.955 }, 00:11:27.955 "uuid": 
"7bf3a2d1-b39e-4392-811e-4630f571c671", 00:11:27.955 "zoned": false 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "aliases": [ 00:11:27.955 "e3984d71-5606-52cc-859c-0ff07228a1ff" 00:11:27.955 ], 00:11:27.955 "assigned_rate_limits": { 00:11:27.955 "r_mbytes_per_sec": 0, 00:11:27.955 "rw_ios_per_sec": 0, 00:11:27.955 "rw_mbytes_per_sec": 0, 00:11:27.955 "w_mbytes_per_sec": 0 00:11:27.955 }, 00:11:27.955 "block_size": 512, 00:11:27.955 "claimed": false, 00:11:27.955 "driver_specific": { 00:11:27.955 "passthru": { 00:11:27.955 "base_bdev_name": "Malloc0", 00:11:27.955 "name": "Passthru0" 00:11:27.955 } 00:11:27.955 }, 00:11:27.955 "memory_domains": [ 00:11:27.955 { 00:11:27.955 "dma_device_id": "system", 00:11:27.955 "dma_device_type": 1 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.955 "dma_device_type": 2 00:11:27.955 } 00:11:27.955 ], 00:11:27.955 "name": "Passthru0", 00:11:27.955 "num_blocks": 16384, 00:11:27.955 "product_name": "passthru", 00:11:27.955 "supported_io_types": { 00:11:27.955 "abort": true, 00:11:27.955 "compare": false, 00:11:27.955 "compare_and_write": false, 00:11:27.955 "flush": true, 00:11:27.955 "nvme_admin": false, 00:11:27.955 "nvme_io": false, 00:11:27.955 "read": true, 00:11:27.955 "reset": true, 00:11:27.955 "unmap": true, 00:11:27.955 "write": true, 00:11:27.955 "write_zeroes": true 00:11:27.955 }, 00:11:27.955 "uuid": "e3984d71-5606-52cc-859c-0ff07228a1ff", 00:11:27.955 "zoned": false 00:11:27.955 } 00:11:27.955 ]' 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:27.955 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:28.213 02:12:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:28.213 00:11:28.213 real 0m0.333s 00:11:28.213 user 0m0.225s 00:11:28.213 sys 0m0.030s 00:11:28.213 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.213 ************************************ 00:11:28.213 END TEST rpc_integrity 00:11:28.213 02:12:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 ************************************ 00:11:28.213 02:12:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:28.213 02:12:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:28.213 
02:12:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.213 02:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 ************************************ 00:11:28.213 START TEST rpc_plugins 00:11:28.213 ************************************ 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:11:28.213 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.213 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:28.213 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.213 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:28.213 { 00:11:28.213 "aliases": [ 00:11:28.213 "fc8ac45c-84f7-4555-a847-7fa27d299e07" 00:11:28.213 ], 00:11:28.213 "assigned_rate_limits": { 00:11:28.213 "r_mbytes_per_sec": 0, 00:11:28.213 "rw_ios_per_sec": 0, 00:11:28.213 "rw_mbytes_per_sec": 0, 00:11:28.213 "w_mbytes_per_sec": 0 00:11:28.213 }, 00:11:28.213 "block_size": 4096, 00:11:28.213 "claimed": false, 00:11:28.213 "driver_specific": {}, 00:11:28.213 "memory_domains": [ 00:11:28.213 { 00:11:28.213 "dma_device_id": "system", 00:11:28.213 "dma_device_type": 1 00:11:28.213 }, 00:11:28.213 { 00:11:28.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.213 "dma_device_type": 2 00:11:28.213 } 00:11:28.213 ], 00:11:28.213 "name": "Malloc1", 00:11:28.213 "num_blocks": 256, 00:11:28.213 "product_name": "Malloc disk", 00:11:28.213 "supported_io_types": { 00:11:28.213 "abort": true, 00:11:28.213 "compare": false, 00:11:28.214 "compare_and_write": false, 00:11:28.214 "flush": true, 00:11:28.214 "nvme_admin": false, 00:11:28.214 "nvme_io": false, 00:11:28.214 "read": true, 00:11:28.214 "reset": true, 00:11:28.214 "unmap": true, 00:11:28.214 "write": true, 00:11:28.214 "write_zeroes": true 00:11:28.214 }, 00:11:28.214 "uuid": "fc8ac45c-84f7-4555-a847-7fa27d299e07", 00:11:28.214 "zoned": false 00:11:28.214 } 00:11:28.214 ]' 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:28.214 02:12:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:28.214 02:12:16 rpc.rpc_plugins 
-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:28.214 00:11:28.214 real 0m0.153s 00:11:28.214 user 0m0.098s 00:11:28.214 sys 0m0.015s 00:11:28.214 ************************************ 00:11:28.214 END TEST rpc_plugins 00:11:28.214 ************************************ 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.214 02:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:28.214 02:12:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:28.214 02:12:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:28.214 02:12:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.214 02:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.214 ************************************ 00:11:28.214 START TEST rpc_trace_cmd_test 00:11:28.214 ************************************ 00:11:28.214 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:11:28.214 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:28.214 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:28.214 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.214 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:28.472 "bdev": { 00:11:28.472 "mask": "0x8", 00:11:28.472 "tpoint_mask": "0xffffffffffffffff" 00:11:28.472 }, 00:11:28.472 "bdev_nvme": { 00:11:28.472 "mask": "0x4000", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "blobfs": { 00:11:28.472 "mask": "0x80", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "dsa": { 00:11:28.472 "mask": "0x200", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "ftl": { 00:11:28.472 "mask": "0x40", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "iaa": { 00:11:28.472 "mask": "0x1000", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "iscsi_conn": { 00:11:28.472 "mask": "0x2", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "nvme_pcie": { 00:11:28.472 "mask": "0x800", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "nvme_tcp": { 00:11:28.472 "mask": "0x2000", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "nvmf_rdma": { 00:11:28.472 "mask": "0x10", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "nvmf_tcp": { 00:11:28.472 "mask": "0x20", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "scsi": { 00:11:28.472 "mask": "0x4", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "sock": { 00:11:28.472 "mask": "0x8000", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "thread": { 00:11:28.472 "mask": "0x400", 00:11:28.472 "tpoint_mask": "0x0" 00:11:28.472 }, 00:11:28.472 "tpoint_group_mask": "0x8", 00:11:28.472 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59904" 00:11:28.472 }' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # 
jq 'has("tpoint_shm_path")' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:28.472 00:11:28.472 real 0m0.273s 00:11:28.472 user 0m0.239s 00:11:28.472 sys 0m0.024s 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.472 02:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.472 ************************************ 00:11:28.472 END TEST rpc_trace_cmd_test 00:11:28.472 ************************************ 00:11:28.731 02:12:16 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:11:28.731 02:12:16 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:11:28.731 02:12:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:28.731 02:12:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.731 02:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.731 ************************************ 00:11:28.731 START TEST go_rpc 00:11:28.731 ************************************ 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["86b4308b-10af-4a45-8cde-c6ba36c28991"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"86b4308b-10af-4a45-8cde-c6ba36c28991","zoned":false}]' 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.731 02:12:16 rpc.go_rpc -- 
rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:11:28.731 02:12:16 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:11:28.731 00:11:28.731 real 0m0.197s 00:11:28.731 user 0m0.132s 00:11:28.731 sys 0m0.033s 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.731 ************************************ 00:11:28.731 END TEST go_rpc 00:11:28.731 02:12:16 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.731 ************************************ 00:11:28.990 02:12:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:28.990 02:12:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:28.990 02:12:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:28.990 02:12:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.990 02:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.990 ************************************ 00:11:28.990 START TEST rpc_daemon_integrity 00:11:28.990 ************************************ 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.990 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:28.990 { 00:11:28.990 "aliases": [ 00:11:28.990 "ca7679e2-31e3-41ce-af55-42b99e8645b0" 00:11:28.990 ], 00:11:28.990 "assigned_rate_limits": { 00:11:28.990 "r_mbytes_per_sec": 0, 00:11:28.990 "rw_ios_per_sec": 0, 00:11:28.990 "rw_mbytes_per_sec": 0, 00:11:28.990 "w_mbytes_per_sec": 0 00:11:28.990 }, 00:11:28.990 "block_size": 512, 00:11:28.990 "claimed": false, 00:11:28.990 "driver_specific": {}, 00:11:28.990 "memory_domains": [ 00:11:28.990 { 00:11:28.990 "dma_device_id": "system", 00:11:28.990 "dma_device_type": 1 00:11:28.990 }, 00:11:28.990 { 00:11:28.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.990 "dma_device_type": 2 00:11:28.991 } 00:11:28.991 ], 00:11:28.991 "name": 
"Malloc3", 00:11:28.991 "num_blocks": 16384, 00:11:28.991 "product_name": "Malloc disk", 00:11:28.991 "supported_io_types": { 00:11:28.991 "abort": true, 00:11:28.991 "compare": false, 00:11:28.991 "compare_and_write": false, 00:11:28.991 "flush": true, 00:11:28.991 "nvme_admin": false, 00:11:28.991 "nvme_io": false, 00:11:28.991 "read": true, 00:11:28.991 "reset": true, 00:11:28.991 "unmap": true, 00:11:28.991 "write": true, 00:11:28.991 "write_zeroes": true 00:11:28.991 }, 00:11:28.991 "uuid": "ca7679e2-31e3-41ce-af55-42b99e8645b0", 00:11:28.991 "zoned": false 00:11:28.991 } 00:11:28.991 ]' 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 [2024-05-15 02:12:16.913684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:28.991 [2024-05-15 02:12:16.913744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.991 [2024-05-15 02:12:16.913768] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb11af0 00:11:28.991 [2024-05-15 02:12:16.913778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.991 [2024-05-15 02:12:16.915289] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.991 [2024-05-15 02:12:16.915325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:28.991 Passthru0 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:28.991 { 00:11:28.991 "aliases": [ 00:11:28.991 "ca7679e2-31e3-41ce-af55-42b99e8645b0" 00:11:28.991 ], 00:11:28.991 "assigned_rate_limits": { 00:11:28.991 "r_mbytes_per_sec": 0, 00:11:28.991 "rw_ios_per_sec": 0, 00:11:28.991 "rw_mbytes_per_sec": 0, 00:11:28.991 "w_mbytes_per_sec": 0 00:11:28.991 }, 00:11:28.991 "block_size": 512, 00:11:28.991 "claim_type": "exclusive_write", 00:11:28.991 "claimed": true, 00:11:28.991 "driver_specific": {}, 00:11:28.991 "memory_domains": [ 00:11:28.991 { 00:11:28.991 "dma_device_id": "system", 00:11:28.991 "dma_device_type": 1 00:11:28.991 }, 00:11:28.991 { 00:11:28.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.991 "dma_device_type": 2 00:11:28.991 } 00:11:28.991 ], 00:11:28.991 "name": "Malloc3", 00:11:28.991 "num_blocks": 16384, 00:11:28.991 "product_name": "Malloc disk", 00:11:28.991 "supported_io_types": { 00:11:28.991 "abort": true, 00:11:28.991 "compare": false, 00:11:28.991 "compare_and_write": false, 00:11:28.991 "flush": true, 00:11:28.991 "nvme_admin": false, 00:11:28.991 "nvme_io": false, 00:11:28.991 "read": true, 00:11:28.991 "reset": true, 00:11:28.991 "unmap": true, 00:11:28.991 "write": true, 
00:11:28.991 "write_zeroes": true 00:11:28.991 }, 00:11:28.991 "uuid": "ca7679e2-31e3-41ce-af55-42b99e8645b0", 00:11:28.991 "zoned": false 00:11:28.991 }, 00:11:28.991 { 00:11:28.991 "aliases": [ 00:11:28.991 "518473a9-c6a0-52b0-a61b-22bffd7fa335" 00:11:28.991 ], 00:11:28.991 "assigned_rate_limits": { 00:11:28.991 "r_mbytes_per_sec": 0, 00:11:28.991 "rw_ios_per_sec": 0, 00:11:28.991 "rw_mbytes_per_sec": 0, 00:11:28.991 "w_mbytes_per_sec": 0 00:11:28.991 }, 00:11:28.991 "block_size": 512, 00:11:28.991 "claimed": false, 00:11:28.991 "driver_specific": { 00:11:28.991 "passthru": { 00:11:28.991 "base_bdev_name": "Malloc3", 00:11:28.991 "name": "Passthru0" 00:11:28.991 } 00:11:28.991 }, 00:11:28.991 "memory_domains": [ 00:11:28.991 { 00:11:28.991 "dma_device_id": "system", 00:11:28.991 "dma_device_type": 1 00:11:28.991 }, 00:11:28.991 { 00:11:28.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.991 "dma_device_type": 2 00:11:28.991 } 00:11:28.991 ], 00:11:28.991 "name": "Passthru0", 00:11:28.991 "num_blocks": 16384, 00:11:28.991 "product_name": "passthru", 00:11:28.991 "supported_io_types": { 00:11:28.991 "abort": true, 00:11:28.991 "compare": false, 00:11:28.991 "compare_and_write": false, 00:11:28.991 "flush": true, 00:11:28.991 "nvme_admin": false, 00:11:28.991 "nvme_io": false, 00:11:28.991 "read": true, 00:11:28.991 "reset": true, 00:11:28.991 "unmap": true, 00:11:28.991 "write": true, 00:11:28.991 "write_zeroes": true 00:11:28.991 }, 00:11:28.991 "uuid": "518473a9-c6a0-52b0-a61b-22bffd7fa335", 00:11:28.991 "zoned": false 00:11:28.991 } 00:11:28.991 ]' 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.991 02:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:29.250 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.250 02:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:11:29.250 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:29.251 00:11:29.251 real 0m0.312s 00:11:29.251 user 0m0.206s 00:11:29.251 sys 0m0.040s 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:29.251 02:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:29.251 ************************************ 00:11:29.251 END TEST rpc_daemon_integrity 00:11:29.251 
************************************ 00:11:29.251 02:12:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:29.251 02:12:17 rpc -- rpc/rpc.sh@84 -- # killprocess 59904 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@946 -- # '[' -z 59904 ']' 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@950 -- # kill -0 59904 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@951 -- # uname 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59904 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:29.251 killing process with pid 59904 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59904' 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@965 -- # kill 59904 00:11:29.251 02:12:17 rpc -- common/autotest_common.sh@970 -- # wait 59904 00:11:29.509 00:11:29.509 real 0m2.290s 00:11:29.509 user 0m3.177s 00:11:29.509 sys 0m0.567s 00:11:29.509 02:12:17 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:29.509 02:12:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 ************************************ 00:11:29.509 END TEST rpc 00:11:29.509 ************************************ 00:11:29.509 02:12:17 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:29.509 02:12:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:29.509 02:12:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:29.509 02:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 ************************************ 00:11:29.509 START TEST skip_rpc 00:11:29.509 ************************************ 00:11:29.509 02:12:17 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:29.767 * Looking for test storage... 00:11:29.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:29.767 02:12:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:29.767 02:12:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:29.767 02:12:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:29.767 02:12:17 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:29.768 02:12:17 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:29.768 02:12:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.768 ************************************ 00:11:29.768 START TEST skip_rpc 00:11:29.768 ************************************ 00:11:29.768 02:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:11:29.768 02:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60146 00:11:29.768 02:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:29.768 02:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:29.768 02:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:29.768 [2024-05-15 02:12:17.614987] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:11:29.768 [2024-05-15 02:12:17.615080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ] 00:11:29.768 [2024-05-15 02:12:17.746219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.026 [2024-05-15 02:12:17.822795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 2024/05/15 02:12:22 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60146 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 60146 ']' 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 60146 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60146 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:35.343 killing process with pid 60146 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60146' 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 60146 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 60146 00:11:35.343 00:11:35.343 real 0m5.320s 00:11:35.343 user 0m5.010s 00:11:35.343 sys 0m0.196s 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:11:35.343 ************************************ 00:11:35.343 END TEST skip_rpc 00:11:35.343 ************************************ 00:11:35.343 02:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 02:12:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:35.343 02:12:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:35.343 02:12:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.343 02:12:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 ************************************ 00:11:35.343 START TEST skip_rpc_with_json 00:11:35.343 ************************************ 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60233 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60233 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 60233 ']' 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:35.343 02:12:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 [2024-05-15 02:12:23.011003] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:11:35.343 [2024-05-15 02:12:23.011153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:11:35.343 [2024-05-15 02:12:23.159211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.343 [2024-05-15 02:12:23.246798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.278 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:36.278 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:11:36.278 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:36.278 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.278 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:36.278 [2024-05-15 02:12:24.042929] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:36.278 2024/05/15 02:12:24 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:11:36.278 request: 00:11:36.278 { 00:11:36.278 "method": "nvmf_get_transports", 00:11:36.278 "params": { 00:11:36.278 "trtype": "tcp" 00:11:36.279 } 00:11:36.279 } 00:11:36.279 Got JSON-RPC error response 00:11:36.279 GoRPCClient: error on JSON-RPC call 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:36.279 [2024-05-15 02:12:24.055095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:36.279 { 00:11:36.279 "subsystems": [ 00:11:36.279 { 00:11:36.279 "subsystem": "keyring", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "iobuf", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "iobuf_set_options", 00:11:36.279 "params": { 00:11:36.279 "large_bufsize": 135168, 00:11:36.279 "large_pool_count": 1024, 00:11:36.279 "small_bufsize": 8192, 00:11:36.279 "small_pool_count": 8192 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "sock", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "sock_impl_set_options", 00:11:36.279 "params": { 00:11:36.279 "enable_ktls": false, 00:11:36.279 "enable_placement_id": 0, 00:11:36.279 "enable_quickack": false, 00:11:36.279 "enable_recv_pipe": 
true, 00:11:36.279 "enable_zerocopy_send_client": false, 00:11:36.279 "enable_zerocopy_send_server": true, 00:11:36.279 "impl_name": "posix", 00:11:36.279 "recv_buf_size": 2097152, 00:11:36.279 "send_buf_size": 2097152, 00:11:36.279 "tls_version": 0, 00:11:36.279 "zerocopy_threshold": 0 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "sock_impl_set_options", 00:11:36.279 "params": { 00:11:36.279 "enable_ktls": false, 00:11:36.279 "enable_placement_id": 0, 00:11:36.279 "enable_quickack": false, 00:11:36.279 "enable_recv_pipe": true, 00:11:36.279 "enable_zerocopy_send_client": false, 00:11:36.279 "enable_zerocopy_send_server": true, 00:11:36.279 "impl_name": "ssl", 00:11:36.279 "recv_buf_size": 4096, 00:11:36.279 "send_buf_size": 4096, 00:11:36.279 "tls_version": 0, 00:11:36.279 "zerocopy_threshold": 0 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "vmd", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "accel", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "accel_set_options", 00:11:36.279 "params": { 00:11:36.279 "buf_count": 2048, 00:11:36.279 "large_cache_size": 16, 00:11:36.279 "sequence_count": 2048, 00:11:36.279 "small_cache_size": 128, 00:11:36.279 "task_count": 2048 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "bdev", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "bdev_set_options", 00:11:36.279 "params": { 00:11:36.279 "bdev_auto_examine": true, 00:11:36.279 "bdev_io_cache_size": 256, 00:11:36.279 "bdev_io_pool_size": 65535, 00:11:36.279 "iobuf_large_cache_size": 16, 00:11:36.279 "iobuf_small_cache_size": 128 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "bdev_raid_set_options", 00:11:36.279 "params": { 00:11:36.279 "process_window_size_kb": 1024 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "bdev_iscsi_set_options", 00:11:36.279 "params": { 00:11:36.279 "timeout_sec": 30 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "bdev_nvme_set_options", 00:11:36.279 "params": { 00:11:36.279 "action_on_timeout": "none", 00:11:36.279 "allow_accel_sequence": false, 00:11:36.279 "arbitration_burst": 0, 00:11:36.279 "bdev_retry_count": 3, 00:11:36.279 "ctrlr_loss_timeout_sec": 0, 00:11:36.279 "delay_cmd_submit": true, 00:11:36.279 "dhchap_dhgroups": [ 00:11:36.279 "null", 00:11:36.279 "ffdhe2048", 00:11:36.279 "ffdhe3072", 00:11:36.279 "ffdhe4096", 00:11:36.279 "ffdhe6144", 00:11:36.279 "ffdhe8192" 00:11:36.279 ], 00:11:36.279 "dhchap_digests": [ 00:11:36.279 "sha256", 00:11:36.279 "sha384", 00:11:36.279 "sha512" 00:11:36.279 ], 00:11:36.279 "disable_auto_failback": false, 00:11:36.279 "fast_io_fail_timeout_sec": 0, 00:11:36.279 "generate_uuids": false, 00:11:36.279 "high_priority_weight": 0, 00:11:36.279 "io_path_stat": false, 00:11:36.279 "io_queue_requests": 0, 00:11:36.279 "keep_alive_timeout_ms": 10000, 00:11:36.279 "low_priority_weight": 0, 00:11:36.279 "medium_priority_weight": 0, 00:11:36.279 "nvme_adminq_poll_period_us": 10000, 00:11:36.279 "nvme_error_stat": false, 00:11:36.279 "nvme_ioq_poll_period_us": 0, 00:11:36.279 "rdma_cm_event_timeout_ms": 0, 00:11:36.279 "rdma_max_cq_size": 0, 00:11:36.279 "rdma_srq_size": 0, 00:11:36.279 "reconnect_delay_sec": 0, 00:11:36.279 "timeout_admin_us": 0, 00:11:36.279 "timeout_us": 0, 00:11:36.279 "transport_ack_timeout": 0, 00:11:36.279 "transport_retry_count": 4, 00:11:36.279 
"transport_tos": 0 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "bdev_nvme_set_hotplug", 00:11:36.279 "params": { 00:11:36.279 "enable": false, 00:11:36.279 "period_us": 100000 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "bdev_wait_for_examine" 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "scsi", 00:11:36.279 "config": null 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "scheduler", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "framework_set_scheduler", 00:11:36.279 "params": { 00:11:36.279 "name": "static" 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "vhost_scsi", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "vhost_blk", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "ublk", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "nbd", 00:11:36.279 "config": [] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "nvmf", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "nvmf_set_config", 00:11:36.279 "params": { 00:11:36.279 "admin_cmd_passthru": { 00:11:36.279 "identify_ctrlr": false 00:11:36.279 }, 00:11:36.279 "discovery_filter": "match_any" 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "nvmf_set_max_subsystems", 00:11:36.279 "params": { 00:11:36.279 "max_subsystems": 1024 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "nvmf_set_crdt", 00:11:36.279 "params": { 00:11:36.279 "crdt1": 0, 00:11:36.279 "crdt2": 0, 00:11:36.279 "crdt3": 0 00:11:36.279 } 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "method": "nvmf_create_transport", 00:11:36.279 "params": { 00:11:36.279 "abort_timeout_sec": 1, 00:11:36.279 "ack_timeout": 0, 00:11:36.279 "buf_cache_size": 4294967295, 00:11:36.279 "c2h_success": true, 00:11:36.279 "data_wr_pool_size": 0, 00:11:36.279 "dif_insert_or_strip": false, 00:11:36.279 "in_capsule_data_size": 4096, 00:11:36.279 "io_unit_size": 131072, 00:11:36.279 "max_aq_depth": 128, 00:11:36.279 "max_io_qpairs_per_ctrlr": 127, 00:11:36.279 "max_io_size": 131072, 00:11:36.279 "max_queue_depth": 128, 00:11:36.279 "num_shared_buffers": 511, 00:11:36.279 "sock_priority": 0, 00:11:36.279 "trtype": "TCP", 00:11:36.279 "zcopy": false 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 }, 00:11:36.279 { 00:11:36.279 "subsystem": "iscsi", 00:11:36.279 "config": [ 00:11:36.279 { 00:11:36.279 "method": "iscsi_set_options", 00:11:36.279 "params": { 00:11:36.279 "allow_duplicated_isid": false, 00:11:36.279 "chap_group": 0, 00:11:36.279 "data_out_pool_size": 2048, 00:11:36.279 "default_time2retain": 20, 00:11:36.279 "default_time2wait": 2, 00:11:36.279 "disable_chap": false, 00:11:36.279 "error_recovery_level": 0, 00:11:36.279 "first_burst_length": 8192, 00:11:36.279 "immediate_data": true, 00:11:36.279 "immediate_data_pool_size": 16384, 00:11:36.279 "max_connections_per_session": 2, 00:11:36.279 "max_large_datain_per_connection": 64, 00:11:36.279 "max_queue_depth": 64, 00:11:36.279 "max_r2t_per_connection": 4, 00:11:36.279 "max_sessions": 128, 00:11:36.279 "mutual_chap": false, 00:11:36.279 "node_base": "iqn.2016-06.io.spdk", 00:11:36.279 "nop_in_interval": 30, 00:11:36.279 "nop_timeout": 60, 00:11:36.279 "pdu_pool_size": 36864, 00:11:36.279 "require_chap": false 00:11:36.279 } 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 } 00:11:36.279 ] 00:11:36.279 } 
00:11:36.279 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60233 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60233 ']' 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60233 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60233 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:36.280 killing process with pid 60233 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60233' 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60233 00:11:36.280 02:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60233 00:11:36.613 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60278 00:11:36.613 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:36.613 02:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60278 ']' 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:41.915 killing process with pid 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60278' 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60278 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:41.915 00:11:41.915 real 0m6.940s 00:11:41.915 user 0m6.841s 00:11:41.915 sys 0m0.525s 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:41.915 ************************************ 
00:11:41.915 END TEST skip_rpc_with_json 00:11:41.915 ************************************ 00:11:41.915 02:12:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:41.915 02:12:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.915 02:12:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.915 02:12:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.915 ************************************ 00:11:41.915 START TEST skip_rpc_with_delay 00:11:41.915 ************************************ 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:41.915 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:42.173 [2024-05-15 02:12:29.963668] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
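The skip_rpc_with_delay case above expects the launch to fail: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is meaningless when --no-rpc-server disables the RPC server entirely. A minimal sketch of the check, using the command and error from the trace:

    # expected to fail with "Cannot use '--wait-for-rpc' if no RPC server is going to be started"
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success" >&2
        exit 1
    fi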
00:11:42.173 [2024-05-15 02:12:29.963787] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.173 00:11:42.173 real 0m0.074s 00:11:42.173 user 0m0.048s 00:11:42.173 sys 0m0.025s 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:42.173 02:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:42.173 ************************************ 00:11:42.173 END TEST skip_rpc_with_delay 00:11:42.173 ************************************ 00:11:42.173 02:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:42.173 02:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:42.173 02:12:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:42.173 02:12:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:42.173 02:12:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:42.173 02:12:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.173 ************************************ 00:11:42.173 START TEST exit_on_failed_rpc_init 00:11:42.173 ************************************ 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60382 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60382 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 60382 ']' 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:42.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:42.173 02:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:42.173 [2024-05-15 02:12:30.102910] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:11:42.173 [2024-05-15 02:12:30.103021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:11:42.432 [2024-05-15 02:12:30.238627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.432 [2024-05-15 02:12:30.309166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:43.368 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:43.369 [2024-05-15 02:12:31.128524] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:43.369 [2024-05-15 02:12:31.128620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:11:43.369 [2024-05-15 02:12:31.262880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.369 [2024-05-15 02:12:31.331775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.369 [2024-05-15 02:12:31.331896] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
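The failure above is deliberate: both spdk_tgt instances default to the same RPC Unix socket, /var/tmp/spdk.sock, so the second instance (-m 0x2) cannot bind it and exits non-zero, which is exactly what exit_on_failed_rpc_init verifies. Outside of this negative test, two targets can coexist by giving each its own socket with -r; a sketch of the idea (the second socket path is hypothetical, not taken from this run, and other per-instance settings are left at defaults):

    # first target on the default RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second target on its own RPC socket, avoiding the "socket in use" error
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &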
00:11:43.369 [2024-05-15 02:12:31.331914] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:43.369 [2024-05-15 02:12:31.331924] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60382 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 60382 ']' 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 60382 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60382 00:11:43.627 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:43.627 killing process with pid 60382 00:11:43.628 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:43.628 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60382' 00:11:43.628 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 60382 00:11:43.628 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 60382 00:11:43.886 00:11:43.886 real 0m1.768s 00:11:43.886 user 0m2.157s 00:11:43.886 sys 0m0.319s 00:11:43.886 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:43.886 ************************************ 00:11:43.886 END TEST exit_on_failed_rpc_init 00:11:43.886 02:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:43.886 ************************************ 00:11:43.886 02:12:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:43.886 00:11:43.886 real 0m14.366s 00:11:43.886 user 0m14.166s 00:11:43.886 sys 0m1.213s 00:11:43.886 02:12:31 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:43.886 02:12:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.886 ************************************ 00:11:43.886 END TEST skip_rpc 00:11:43.886 ************************************ 00:11:43.886 02:12:31 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:43.886 02:12:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:43.886 02:12:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:43.886 02:12:31 -- common/autotest_common.sh@10 -- # set +x 00:11:43.886 
************************************ 00:11:43.886 START TEST rpc_client 00:11:43.886 ************************************ 00:11:43.886 02:12:31 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:44.145 * Looking for test storage... 00:11:44.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:44.145 02:12:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:44.145 OK 00:11:44.145 02:12:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:44.145 00:11:44.145 real 0m0.090s 00:11:44.145 user 0m0.039s 00:11:44.145 sys 0m0.057s 00:11:44.145 02:12:31 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:44.145 02:12:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:44.145 ************************************ 00:11:44.145 END TEST rpc_client 00:11:44.145 ************************************ 00:11:44.145 02:12:32 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:44.145 02:12:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:44.145 02:12:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.145 02:12:32 -- common/autotest_common.sh@10 -- # set +x 00:11:44.145 ************************************ 00:11:44.145 START TEST json_config 00:11:44.145 ************************************ 00:11:44.145 02:12:32 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.145 02:12:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.145 02:12:32 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.145 02:12:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.145 02:12:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.145 02:12:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.145 02:12:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.145 02:12:32 json_config -- paths/export.sh@5 -- # export PATH 00:11:44.145 02:12:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@47 -- # : 0 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.145 02:12:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:44.145 02:12:32 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:44.145 INFO: JSON configuration test init 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:44.145 02:12:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:44.145 02:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:44.145 02:12:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:44.145 02:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:44.145 Waiting for target to run... 00:11:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:44.145 02:12:32 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:44.145 02:12:32 json_config -- json_config/common.sh@9 -- # local app=target 00:11:44.145 02:12:32 json_config -- json_config/common.sh@10 -- # shift 00:11:44.145 02:12:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:44.145 02:12:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:44.145 02:12:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:44.145 02:12:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:44.145 02:12:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:44.145 02:12:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60530 00:11:44.145 02:12:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
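Setup for the json_config test boils down to starting a target that waits for RPC configuration and then pushing a generated config into it over the spdk_tgt.sock socket; a sketch of the flow traced in the lines that follow (the exact plumbing inside json_config.sh may differ slightly from this pipe):

    # start the target paused until RPC configuration arrives
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # generate an NVMe bdev config and load it into the running target
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | \
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config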
00:11:44.146 02:12:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:44.146 02:12:32 json_config -- json_config/common.sh@25 -- # waitforlisten 60530 /var/tmp/spdk_tgt.sock 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@827 -- # '[' -z 60530 ']' 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:44.146 02:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:44.405 [2024-05-15 02:12:32.183564] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:44.405 [2024-05-15 02:12:32.183697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60530 ] 00:11:44.664 [2024-05-15 02:12:32.507188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.664 [2024-05-15 02:12:32.553336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@860 -- # return 0 00:11:45.624 02:12:33 json_config -- json_config/common.sh@26 -- # echo '' 00:11:45.624 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.624 02:12:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:45.624 02:12:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:45.624 02:12:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:45.887 02:12:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:45.887 02:12:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@46 -- # 
local enabled_types 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:45.887 02:12:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:45.887 02:12:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@48 -- # local get_types 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:46.146 02:12:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.146 02:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@55 -- # return 0 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:46.146 02:12:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:46.146 02:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:46.146 02:12:34 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:46.146 02:12:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:46.405 MallocForNvmf0 00:11:46.405 02:12:34 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:46.405 02:12:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:46.663 MallocForNvmf1 00:11:46.922 02:12:34 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:46.922 02:12:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:47.180 [2024-05-15 02:12:34.995357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.180 02:12:35 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.180 02:12:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:47.436 02:12:35 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:47.436 02:12:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:47.706 02:12:35 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:47.706 02:12:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:47.977 02:12:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:47.977 02:12:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:48.236 [2024-05-15 02:12:36.071737] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:48.236 [2024-05-15 02:12:36.072014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:48.236 02:12:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:48.236 02:12:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.236 02:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:48.236 02:12:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:48.236 02:12:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.236 02:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:48.236 02:12:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:48.236 02:12:36 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:48.236 02:12:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:48.495 MallocBdevForConfigChangeCheck 00:11:48.495 02:12:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:48.495 02:12:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.495 02:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:48.495 02:12:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:48.495 02:12:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:49.062 INFO: shutting down applications... 00:11:49.062 02:12:36 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
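The RPC sequence traced above builds the NVMe-oF target state end to end: two malloc bdevs, a TCP transport, one subsystem, two namespaces, and a listener on 127.0.0.1:4420. Collected into a plain shell sketch against the same socket (commands taken from the trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0           # 8 MB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1          # 4 MB malloc bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0                # TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420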
00:11:49.062 02:12:36 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:49.062 02:12:36 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:49.062 02:12:36 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:49.062 02:12:36 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:49.320 Calling clear_iscsi_subsystem 00:11:49.320 Calling clear_nvmf_subsystem 00:11:49.320 Calling clear_nbd_subsystem 00:11:49.320 Calling clear_ublk_subsystem 00:11:49.320 Calling clear_vhost_blk_subsystem 00:11:49.320 Calling clear_vhost_scsi_subsystem 00:11:49.320 Calling clear_bdev_subsystem 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:49.320 02:12:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:49.887 02:12:37 json_config -- json_config/json_config.sh@345 -- # break 00:11:49.887 02:12:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:11:49.887 02:12:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:11:49.887 02:12:37 json_config -- json_config/common.sh@31 -- # local app=target 00:11:49.887 02:12:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:49.887 02:12:37 json_config -- json_config/common.sh@35 -- # [[ -n 60530 ]] 00:11:49.887 02:12:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60530 00:11:49.887 [2024-05-15 02:12:37.655364] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:49.887 02:12:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:49.887 02:12:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:49.887 02:12:37 json_config -- json_config/common.sh@41 -- # kill -0 60530 00:11:49.887 02:12:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:50.453 02:12:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:50.453 02:12:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:50.453 02:12:38 json_config -- json_config/common.sh@41 -- # kill -0 60530 00:11:50.453 02:12:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:50.453 02:12:38 json_config -- json_config/common.sh@43 -- # break 00:11:50.453 SPDK target shutdown done 00:11:50.453 02:12:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:50.453 02:12:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:50.453 INFO: relaunching applications... 00:11:50.453 02:12:38 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
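Teardown in this test is verified rather than assumed: clear_config.py deletes every subsystem it can through the same RPC socket, and the countdown loop above keeps re-saving the configuration until config_filter.py reports it empty. Roughly (the exact pipeline inside json_config.sh is a hedged reconstruction):

    # wipe all subsystem configuration from the running target
    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # confirm nothing is left once global parameters are stripped out
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | \
        /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters | \
        /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty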
00:11:50.453 02:12:38 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:50.453 02:12:38 json_config -- json_config/common.sh@9 -- # local app=target 00:11:50.453 02:12:38 json_config -- json_config/common.sh@10 -- # shift 00:11:50.453 02:12:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:50.453 02:12:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:50.453 02:12:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:50.453 02:12:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:50.453 02:12:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:50.453 02:12:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60810 00:11:50.453 02:12:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:50.453 Waiting for target to run... 00:11:50.453 02:12:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:50.453 02:12:38 json_config -- json_config/common.sh@25 -- # waitforlisten 60810 /var/tmp/spdk_tgt.sock 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@827 -- # '[' -z 60810 ']' 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:50.453 02:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:50.453 [2024-05-15 02:12:38.220232] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:50.453 [2024-05-15 02:12:38.220332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:11:50.711 [2024-05-15 02:12:38.505110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.711 [2024-05-15 02:12:38.550327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.969 [2024-05-15 02:12:38.839961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.969 [2024-05-15 02:12:38.871855] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:50.969 [2024-05-15 02:12:38.872123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:51.227 02:12:39 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:51.227 02:12:39 json_config -- common/autotest_common.sh@860 -- # return 0 00:11:51.227 00:11:51.227 02:12:39 json_config -- json_config/common.sh@26 -- # echo '' 00:11:51.227 INFO: Checking if target configuration is the same... 
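The relaunch above restarts the target directly from the configuration saved earlier, which is what makes the follow-up comparison meaningful; the launch line from the trace, for reference:

    # come back up from the previously saved configuration instead of live RPCs
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json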
00:11:51.227 02:12:39 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:11:51.227 02:12:39 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:51.227 02:12:39 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:51.227 02:12:39 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:11:51.227 02:12:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:51.485 + '[' 2 -ne 2 ']' 00:11:51.485 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:51.485 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:51.485 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:51.485 +++ basename /dev/fd/62 00:11:51.485 ++ mktemp /tmp/62.XXX 00:11:51.485 + tmp_file_1=/tmp/62.82T 00:11:51.485 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:51.485 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:51.485 + tmp_file_2=/tmp/spdk_tgt_config.json.7zh 00:11:51.485 + ret=0 00:11:51.485 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:51.744 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:51.744 + diff -u /tmp/62.82T /tmp/spdk_tgt_config.json.7zh 00:11:51.744 + echo 'INFO: JSON config files are the same' 00:11:51.744 INFO: JSON config files are the same 00:11:51.744 + rm /tmp/62.82T /tmp/spdk_tgt_config.json.7zh 00:11:51.744 + exit 0 00:11:51.744 02:12:39 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:11:51.744 INFO: changing configuration and checking if this can be detected... 00:11:51.744 02:12:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:51.744 02:12:39 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:51.744 02:12:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:52.004 02:12:39 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:11:52.004 02:12:39 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:52.004 02:12:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:52.004 + '[' 2 -ne 2 ']' 00:11:52.004 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:52.004 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
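json_diff.sh decides "same or different" by normalizing both sides before diffing: each configuration is passed through config_filter.py -method sort so that key and array ordering cannot cause false mismatches, then compared with diff -u. A sketch of the idea, assuming the filter reads JSON on stdin (the wrapper script handles the temp files in the real run):

    sort_cfg() { /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort; }
    # normalized live configuration vs normalized saved file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
    sort_cfg < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'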
00:11:52.004 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:52.004 +++ basename /dev/fd/62 00:11:52.004 ++ mktemp /tmp/62.XXX 00:11:52.004 + tmp_file_1=/tmp/62.V6V 00:11:52.004 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:52.004 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:52.004 + tmp_file_2=/tmp/spdk_tgt_config.json.E3m 00:11:52.004 + ret=0 00:11:52.004 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:52.570 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:52.570 + diff -u /tmp/62.V6V /tmp/spdk_tgt_config.json.E3m 00:11:52.570 + ret=1 00:11:52.570 + echo '=== Start of file: /tmp/62.V6V ===' 00:11:52.570 + cat /tmp/62.V6V 00:11:52.570 + echo '=== End of file: /tmp/62.V6V ===' 00:11:52.570 + echo '' 00:11:52.570 + echo '=== Start of file: /tmp/spdk_tgt_config.json.E3m ===' 00:11:52.570 + cat /tmp/spdk_tgt_config.json.E3m 00:11:52.570 + echo '=== End of file: /tmp/spdk_tgt_config.json.E3m ===' 00:11:52.570 + echo '' 00:11:52.570 + rm /tmp/62.V6V /tmp/spdk_tgt_config.json.E3m 00:11:52.570 + exit 1 00:11:52.570 INFO: configuration change detected. 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@317 -- # [[ -n 60810 ]] 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@193 -- # uname -s 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:52.570 02:12:40 json_config -- json_config/json_config.sh@323 -- # killprocess 60810 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@946 -- # '[' -z 60810 ']' 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@950 -- # kill -0 60810 00:11:52.570 02:12:40 json_config -- common/autotest_common.sh@951 -- # uname 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60810 00:11:52.828 
02:12:40 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:52.828 killing process with pid 60810 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60810' 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@965 -- # kill 60810 00:11:52.828 [2024-05-15 02:12:40.607279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@970 -- # wait 60810 00:11:52.828 02:12:40 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:52.828 02:12:40 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.828 02:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:53.087 02:12:40 json_config -- json_config/json_config.sh@328 -- # return 0 00:11:53.087 INFO: Success 00:11:53.087 02:12:40 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:11:53.087 00:11:53.087 real 0m8.838s 00:11:53.087 user 0m13.191s 00:11:53.087 sys 0m1.535s 00:11:53.087 02:12:40 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.087 02:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:53.087 ************************************ 00:11:53.087 END TEST json_config 00:11:53.087 ************************************ 00:11:53.087 02:12:40 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:53.088 02:12:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:53.088 02:12:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.088 02:12:40 -- common/autotest_common.sh@10 -- # set +x 00:11:53.088 ************************************ 00:11:53.088 START TEST json_config_extra_key 00:11:53.088 ************************************ 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.088 02:12:40 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.088 02:12:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.088 02:12:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.088 02:12:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.088 02:12:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.088 02:12:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.088 02:12:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.088 02:12:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:53.088 02:12:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.088 02:12:40 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.088 02:12:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:53.088 INFO: launching applications... 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:53.088 02:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60981 00:11:53.088 Waiting for target to run... 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60981 /var/tmp/spdk_tgt.sock 00:11:53.088 02:12:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 60981 ']' 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:53.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.088 02:12:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:53.088 [2024-05-15 02:12:41.048618] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:53.088 [2024-05-15 02:12:41.048743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:11:53.654 [2024-05-15 02:12:41.362639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.654 [2024-05-15 02:12:41.412583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.219 02:12:42 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:54.219 00:11:54.219 02:12:42 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:54.219 INFO: shutting down applications... 00:11:54.219 02:12:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
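json_config_extra_key exercises the same --json startup path as the previous test, but boots the target from the dedicated extra_key.json fixture and then simply checks that it starts and shuts down cleanly; the launch from the trace:

    # start the target from the extra_key.json fixture and poll the RPC socket until it is up
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json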
00:11:54.219 02:12:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60981 ]] 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60981 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60981 00:11:54.219 02:12:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60981 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:54.784 SPDK target shutdown done 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:54.784 02:12:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:54.784 Success 00:11:54.784 02:12:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:54.784 00:11:54.784 real 0m1.609s 00:11:54.784 user 0m1.505s 00:11:54.784 sys 0m0.304s 00:11:54.784 02:12:42 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.784 02:12:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:54.784 ************************************ 00:11:54.784 END TEST json_config_extra_key 00:11:54.784 ************************************ 00:11:54.784 02:12:42 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:54.784 02:12:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:54.784 02:12:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.784 02:12:42 -- common/autotest_common.sh@10 -- # set +x 00:11:54.784 ************************************ 00:11:54.784 START TEST alias_rpc 00:11:54.784 ************************************ 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:54.784 * Looking for test storage... 00:11:54.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:54.784 02:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:54.784 02:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61062 00:11:54.784 02:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:54.784 02:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61062 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 61062 ']' 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
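Note on the shutdown above: json_config_test_shutdown_app sends SIGINT and then polls the PID with kill -0 for up to 30 half-second intervals, the same stop-and-wait idiom the later tests reuse. Roughly:

    # Ask the target to exit, then wait (at most ~15 s) for the process to go away.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only checks that the PID still exists
        sleep 0.5
    done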
00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:54.784 02:12:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.784 [2024-05-15 02:12:42.702881] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:54.784 [2024-05-15 02:12:42.703018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61062 ] 00:11:55.042 [2024-05-15 02:12:42.844151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.042 [2024-05-15 02:12:42.908863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.299 02:12:43 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.299 02:12:43 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:55.299 02:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:55.558 02:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61062 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 61062 ']' 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 61062 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61062 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:55.558 killing process with pid 61062 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61062' 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@965 -- # kill 61062 00:11:55.558 02:12:43 alias_rpc -- common/autotest_common.sh@970 -- # wait 61062 00:11:55.815 00:11:55.815 real 0m1.164s 00:11:55.815 user 0m1.397s 00:11:55.815 sys 0m0.317s 00:11:55.815 02:12:43 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.815 02:12:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.815 ************************************ 00:11:55.815 END TEST alias_rpc 00:11:55.815 ************************************ 00:11:55.815 02:12:43 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:11:55.815 02:12:43 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:55.815 02:12:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.815 02:12:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.815 02:12:43 -- common/autotest_common.sh@10 -- # set +x 00:11:55.815 ************************************ 00:11:55.815 START TEST dpdk_mem_utility 00:11:55.816 ************************************ 00:11:55.816 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:55.816 * Looking for test storage... 
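Note on the alias_rpc pass above: the test only has to show that scripts/rpc.py load_config still accepts configurations written against deprecated (aliased) method names; the -i switch presumably maps to load_config's include-aliases option. A hedged sketch (old_rpc_names.json is a hypothetical file standing in for the test's config):

    # Load a config that refers to RPCs by their legacy names; alias handling
    # lets the running spdk_tgt translate them to the current method names.
    scripts/rpc.py load_config -i < old_rpc_names.json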
00:11:55.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:55.816 02:12:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:56.073 02:12:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61136 00:11:56.073 02:12:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61136 00:11:56.073 02:12:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 61136 ']' 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:56.073 02:12:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:56.073 [2024-05-15 02:12:43.880416] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:56.073 [2024-05-15 02:12:43.880504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61136 ] 00:11:56.073 [2024-05-15 02:12:44.011563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.073 [2024-05-15 02:12:44.072640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.012 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:57.012 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:11:57.012 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:57.012 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:57.012 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.012 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:57.012 { 00:11:57.012 "filename": "/tmp/spdk_mem_dump.txt" 00:11:57.012 } 00:11:57.012 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.012 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:57.012 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:57.012 1 heaps totaling size 814.000000 MiB 00:11:57.012 size: 814.000000 MiB heap id: 0 00:11:57.012 end heaps---------- 00:11:57.012 8 mempools totaling size 598.116089 MiB 00:11:57.012 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:57.012 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:57.012 size: 84.521057 MiB name: bdev_io_61136 00:11:57.012 size: 51.011292 MiB name: evtpool_61136 00:11:57.012 size: 50.003479 MiB name: msgpool_61136 00:11:57.012 size: 21.763794 MiB name: PDU_Pool 00:11:57.012 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:57.012 size: 0.026123 
MiB name: Session_Pool 00:11:57.012 end mempools------- 00:11:57.012 6 memzones totaling size 4.142822 MiB 00:11:57.012 size: 1.000366 MiB name: RG_ring_0_61136 00:11:57.012 size: 1.000366 MiB name: RG_ring_1_61136 00:11:57.012 size: 1.000366 MiB name: RG_ring_4_61136 00:11:57.012 size: 1.000366 MiB name: RG_ring_5_61136 00:11:57.012 size: 0.125366 MiB name: RG_ring_2_61136 00:11:57.012 size: 0.015991 MiB name: RG_ring_3_61136 00:11:57.012 end memzones------- 00:11:57.012 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:57.012 heap id: 0 total size: 814.000000 MiB number of busy elements: 236 number of free elements: 15 00:11:57.012 list of free elements. size: 12.483643 MiB 00:11:57.012 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:57.012 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:57.012 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:57.012 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:57.012 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:57.012 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:57.012 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:57.012 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:57.012 element at address: 0x200000200000 with size: 0.836853 MiB 00:11:57.012 element at address: 0x20001aa00000 with size: 0.571167 MiB 00:11:57.012 element at address: 0x20000b200000 with size: 0.489258 MiB 00:11:57.012 element at address: 0x200000800000 with size: 0.486877 MiB 00:11:57.012 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:57.012 element at address: 0x200027e00000 with size: 0.397949 MiB 00:11:57.012 element at address: 0x200003a00000 with size: 0.350769 MiB 00:11:57.012 list of standard malloc elements. 
size: 199.253784 MiB 00:11:57.012 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:57.012 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:57.012 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:57.012 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:57.012 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:57.012 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:57.012 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:57.012 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:57.012 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:57.012 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000087cec0 with size: 0.000183 MiB 
00:11:57.012 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:57.012 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:57.012 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:57.013 element at 
address: 0x2000194bc740 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94780 
with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:57.013 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 
00:11:57.013 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:57.013 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:57.013 list of memzone associated elements. 
size: 602.262573 MiB 00:11:57.013 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:57.013 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:57.013 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:57.013 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:57.013 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:57.013 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61136_0 00:11:57.013 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:57.013 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61136_0 00:11:57.013 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:57.013 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61136_0 00:11:57.013 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:57.013 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:57.013 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:57.013 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:57.013 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:57.014 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61136 00:11:57.014 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:57.014 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61136 00:11:57.014 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:57.014 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61136 00:11:57.014 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:57.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:57.014 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:57.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:57.014 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:57.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:57.014 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:57.014 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:57.014 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:57.014 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61136 00:11:57.014 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:57.014 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61136 00:11:57.014 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:57.014 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61136 00:11:57.014 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:57.014 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61136 00:11:57.014 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:57.014 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61136 00:11:57.014 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:57.014 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:57.014 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:57.014 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:57.014 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:57.014 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:57.014 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:57.014 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61136 00:11:57.014 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:57.014 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:57.014 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:11:57.014 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:57.014 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:57.014 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61136 00:11:57.014 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:11:57.014 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:57.014 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:11:57.014 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61136 00:11:57.014 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:57.014 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61136 00:11:57.014 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:11:57.014 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:57.014 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:57.014 02:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61136 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 61136 ']' 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 61136 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61136 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:57.014 killing process with pid 61136 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61136' 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 61136 00:11:57.014 02:12:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 61136 00:11:57.581 00:11:57.581 real 0m1.527s 00:11:57.581 user 0m1.758s 00:11:57.581 sys 0m0.301s 00:11:57.581 02:12:45 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.581 02:12:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:57.581 ************************************ 00:11:57.581 END TEST dpdk_mem_utility 00:11:57.581 ************************************ 00:11:57.581 02:12:45 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:57.581 02:12:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:57.581 02:12:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.581 02:12:45 -- common/autotest_common.sh@10 -- # set +x 00:11:57.581 ************************************ 00:11:57.581 START TEST event 00:11:57.581 ************************************ 00:11:57.581 02:12:45 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:57.581 * Looking for test storage... 
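Note on the dpdk_mem_utility pass above: the target is asked over RPC to write its DPDK memory statistics, and scripts/dpdk_mem_info.py then renders the summary and (with -m 0, apparently selecting heap 0) the per-element detail printed above. The flow is roughly:

    # Dump DPDK memory stats from the running target, then post-process the dump file.
    scripts/rpc.py env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                # heaps / mempools / memzones summary
    scripts/dpdk_mem_info.py -m 0           # detailed free/malloc element and memzone listing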
00:11:57.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:57.581 02:12:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:57.581 02:12:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:57.581 02:12:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:57.581 02:12:45 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:57.581 02:12:45 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.581 02:12:45 event -- common/autotest_common.sh@10 -- # set +x 00:11:57.581 ************************************ 00:11:57.581 START TEST event_perf 00:11:57.581 ************************************ 00:11:57.581 02:12:45 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:57.581 Running I/O for 1 seconds...[2024-05-15 02:12:45.429459] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:57.581 [2024-05-15 02:12:45.429548] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:11:57.581 [2024-05-15 02:12:45.571307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.839 [2024-05-15 02:12:45.647581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.839 [2024-05-15 02:12:45.647724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.839 [2024-05-15 02:12:45.647768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.839 [2024-05-15 02:12:45.647774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.774 Running I/O for 1 seconds... 00:11:58.774 lcore 0: 178406 00:11:58.774 lcore 1: 178405 00:11:58.774 lcore 2: 178407 00:11:58.774 lcore 3: 178408 00:11:58.774 done. 00:11:58.774 00:11:58.774 real 0m1.334s 00:11:58.774 user 0m4.157s 00:11:58.774 sys 0m0.051s 00:11:58.774 02:12:46 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:58.774 ************************************ 00:11:58.774 02:12:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:58.774 END TEST event_perf 00:11:58.774 ************************************ 00:11:58.774 02:12:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:58.774 02:12:46 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:58.774 02:12:46 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.774 02:12:46 event -- common/autotest_common.sh@10 -- # set +x 00:11:59.043 ************************************ 00:11:59.043 START TEST event_reactor 00:11:59.043 ************************************ 00:11:59.043 02:12:46 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:59.043 [2024-05-15 02:12:46.812914] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
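Note on the event_perf run above: the benchmark spins reactors on every core in the mask for the requested time and prints how many events each lcore processed, so the four near-identical counts suggest the load was spread evenly. Invocation as logged, with the flags spelled out:

    # -m 0xF: reactor core mask (cores 0-3); -t 1: run for one second.
    test/event/event_perf/event_perf -m 0xF -t 1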
00:11:59.043 [2024-05-15 02:12:46.813036] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:11:59.043 [2024-05-15 02:12:46.956369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.043 [2024-05-15 02:12:47.024464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.419 test_start 00:12:00.419 oneshot 00:12:00.419 tick 100 00:12:00.419 tick 100 00:12:00.419 tick 250 00:12:00.419 tick 100 00:12:00.419 tick 100 00:12:00.419 tick 100 00:12:00.419 tick 250 00:12:00.419 tick 500 00:12:00.419 tick 100 00:12:00.419 tick 100 00:12:00.419 tick 250 00:12:00.419 tick 100 00:12:00.419 tick 100 00:12:00.419 test_end 00:12:00.419 00:12:00.419 real 0m1.324s 00:12:00.419 user 0m1.168s 00:12:00.419 sys 0m0.049s 00:12:00.419 02:12:48 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:00.419 02:12:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:00.419 ************************************ 00:12:00.419 END TEST event_reactor 00:12:00.419 ************************************ 00:12:00.419 02:12:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:00.419 02:12:48 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:00.419 02:12:48 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.419 02:12:48 event -- common/autotest_common.sh@10 -- # set +x 00:12:00.419 ************************************ 00:12:00.419 START TEST event_reactor_perf 00:12:00.419 ************************************ 00:12:00.419 02:12:48 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:00.419 [2024-05-15 02:12:48.178447] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:00.419 [2024-05-15 02:12:48.178559] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61294 ] 00:12:00.419 [2024-05-15 02:12:48.322693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.419 [2024-05-15 02:12:48.393928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.795 test_start 00:12:01.795 test_end 00:12:01.795 Performance: 337887 events per second 00:12:01.795 00:12:01.795 real 0m1.328s 00:12:01.795 user 0m1.173s 00:12:01.795 sys 0m0.049s 00:12:01.795 02:12:49 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:01.795 02:12:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 ************************************ 00:12:01.795 END TEST event_reactor_perf 00:12:01.795 ************************************ 00:12:01.795 02:12:49 event -- event/event.sh@49 -- # uname -s 00:12:01.795 02:12:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:01.795 02:12:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:01.795 02:12:49 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:01.795 02:12:49 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.795 02:12:49 event -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 ************************************ 00:12:01.795 START TEST event_scheduler 00:12:01.795 ************************************ 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:01.795 * Looking for test storage... 00:12:01.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:01.795 02:12:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:01.795 02:12:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61355 00:12:01.795 02:12:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:01.795 02:12:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:01.795 02:12:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61355 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 61355 ']' 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:01.795 02:12:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 [2024-05-15 02:12:49.650250] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:01.795 [2024-05-15 02:12:49.650342] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61355 ] 00:12:01.795 [2024-05-15 02:12:49.787525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.053 [2024-05-15 02:12:49.849743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.053 [2024-05-15 02:12:49.853350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.053 [2024-05-15 02:12:49.853497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.053 [2024-05-15 02:12:49.853510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.620 02:12:50 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:02.620 02:12:50 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:12:02.620 02:12:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:02.620 02:12:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.620 02:12:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:02.620 POWER: Env isn't set yet! 00:12:02.620 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:02.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:02.620 POWER: Cannot set governor of lcore 0 to userspace 00:12:02.620 POWER: Attempting to initialise PSTAT power management... 00:12:02.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:02.620 POWER: Cannot set governor of lcore 0 to performance 00:12:02.620 POWER: Attempting to initialise AMD PSTATE power management... 00:12:02.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:02.620 POWER: Cannot set governor of lcore 0 to userspace 00:12:02.620 POWER: Attempting to initialise CPPC power management... 00:12:02.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:02.620 POWER: Cannot set governor of lcore 0 to userspace 00:12:02.620 POWER: Attempting to initialise VM power management... 00:12:02.620 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:02.620 POWER: Unable to set Power Management Environment for lcore 0 00:12:02.620 [2024-05-15 02:12:50.630855] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:12:02.620 [2024-05-15 02:12:50.630873] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:12:02.620 [2024-05-15 02:12:50.630887] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 [2024-05-15 02:12:50.686897] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
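Note on the scheduler startup above: because the app is launched with --wait-for-rpc, the dynamic scheduler is selected over RPC before framework initialization, and the cpufreq/governor failures are expected inside a VM with no host power-management access; the NOTICE lines show the test continuing without a DPDK governor. The RPC-side sequence amounts to:

    # Pick the dynamic scheduler while the framework is paused, then finish init.
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init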
00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 ************************************ 00:12:02.879 START TEST scheduler_create_thread 00:12:02.879 ************************************ 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 2 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 3 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 4 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 5 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 6 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.879 7 00:12:02.879 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 8 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 9 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 10 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.880 02:12:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:04.256 02:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.256 00:12:04.256 real 0m1.168s 00:12:04.256 user 0m0.020s 00:12:04.256 sys 0m0.003s 00:12:04.256 02:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:04.256 02:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:04.256 ************************************ 00:12:04.256 END TEST scheduler_create_thread 00:12:04.256 ************************************ 00:12:04.256 02:12:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:04.256 02:12:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61355 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 61355 ']' 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 61355 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61355 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:04.256 killing process with pid 61355 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61355' 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 61355 00:12:04.256 02:12:51 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 61355 00:12:04.516 [2024-05-15 02:12:52.345059] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
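Note on scheduler_create_thread above: every step goes through rpc.py's plugin hook (rpc_cmd here wraps scripts/rpc.py), with scheduler_plugin supplying the thread-management methods; each create call names a thread, pins it with a cpumask, and declares how busy it claims to be. The calls seen in the log, written out directly:

    # Pinned threads on core 0: one reporting 100% busy, one idle.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Threads can later be throttled or deleted by the id returned at creation.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12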
00:12:04.779 00:12:04.779 real 0m2.999s 00:12:04.779 user 0m5.620s 00:12:04.779 sys 0m0.294s 00:12:04.779 02:12:52 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:04.779 02:12:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:04.779 ************************************ 00:12:04.779 END TEST event_scheduler 00:12:04.779 ************************************ 00:12:04.779 02:12:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:04.779 02:12:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:04.779 02:12:52 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:04.779 02:12:52 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:04.779 02:12:52 event -- common/autotest_common.sh@10 -- # set +x 00:12:04.779 ************************************ 00:12:04.779 START TEST app_repeat 00:12:04.779 ************************************ 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61456 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:04.779 Process app_repeat pid: 61456 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61456' 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:04.779 spdk_app_start Round 0 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61456 /var/tmp/spdk-nbd.sock 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61456 ']' 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.779 02:12:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:04.779 02:12:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:04.779 [2024-05-15 02:12:52.608745] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:04.779 [2024-05-15 02:12:52.608824] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:12:04.779 [2024-05-15 02:12:52.741444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:05.054 [2024-05-15 02:12:52.824419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.054 [2024-05-15 02:12:52.824470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.620 02:12:53 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:05.620 02:12:53 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:05.620 02:12:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:06.186 Malloc0 00:12:06.186 02:12:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:06.186 Malloc1 00:12:06.445 02:12:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.445 02:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:06.704 /dev/nbd0 00:12:06.704 02:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:06.704 02:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:06.704 02:12:54 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:06.704 1+0 records in 00:12:06.704 1+0 records out 00:12:06.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348355 s, 11.8 MB/s 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:06.704 02:12:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:06.704 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.704 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.704 02:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:06.964 /dev/nbd1 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:06.964 1+0 records in 00:12:06.964 1+0 records out 00:12:06.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309111 s, 13.3 MB/s 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:06.964 02:12:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
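The waitfornbd checks interleaved in the trace above reconstruct to roughly the helper below: poll /proc/partitions until the device node appears, then confirm that a direct 4 KiB read actually returns data. The retry sleeps, the failure path and the /tmp scratch path are assumptions; the individual commands mirror the trace.

waitfornbd() {
    local nbd_name=$1 i size
    # wait (up to 20 attempts) for the kernel to expose the device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # then require that a direct 4 KiB read produces a non-empty file
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != "0" ] && return 0
        sleep 0.1
    done
    return 1
}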
00:12:06.964 02:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:07.223 { 00:12:07.223 "bdev_name": "Malloc0", 00:12:07.223 "nbd_device": "/dev/nbd0" 00:12:07.223 }, 00:12:07.223 { 00:12:07.223 "bdev_name": "Malloc1", 00:12:07.223 "nbd_device": "/dev/nbd1" 00:12:07.223 } 00:12:07.223 ]' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:07.223 { 00:12:07.223 "bdev_name": "Malloc0", 00:12:07.223 "nbd_device": "/dev/nbd0" 00:12:07.223 }, 00:12:07.223 { 00:12:07.223 "bdev_name": "Malloc1", 00:12:07.223 "nbd_device": "/dev/nbd1" 00:12:07.223 } 00:12:07.223 ]' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:07.223 /dev/nbd1' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:07.223 /dev/nbd1' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:07.223 256+0 records in 00:12:07.223 256+0 records out 00:12:07.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752724 s, 139 MB/s 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.223 02:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:07.481 256+0 records in 00:12:07.481 256+0 records out 00:12:07.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257106 s, 40.8 MB/s 00:12:07.481 02:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.481 02:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:07.481 256+0 records in 00:12:07.481 256+0 records out 00:12:07.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290094 s, 36.1 MB/s 00:12:07.481 02:12:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:07.482 02:12:55 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.482 02:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.739 02:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:07.997 02:12:55 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.997 02:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:08.256 02:12:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:08.256 02:12:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:08.515 02:12:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:08.790 [2024-05-15 02:12:56.617248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:08.790 [2024-05-15 02:12:56.675673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.790 [2024-05-15 02:12:56.675681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.790 [2024-05-15 02:12:56.705960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:08.790 [2024-05-15 02:12:56.706022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:12.130 02:12:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:12.130 spdk_app_start Round 1 00:12:12.130 02:12:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:12.130 02:12:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61456 /var/tmp/spdk-nbd.sock 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61456 ']' 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:12.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
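The Round 0 data path traced above (fill a scratch file, write it out, read it back) reduces to the dd/cmp sequence below. This sketch folds the separate write and verify passes of nbd_dd_data_verify into one function and uses an illustrative /tmp scratch path instead of the repo-local nbdrandtest file:

nbd_data_verify() {
    local tmp_file=/tmp/nbdrandtest dev
    # 1 MiB of random data, copied to every exported NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "$@"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # read each device back and require a byte-for-byte match
    for dev in "$@"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
}

nbd_data_verify /dev/nbd0 /dev/nbd1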
00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:12.130 02:12:59 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:12.130 02:12:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:12.130 Malloc0 00:12:12.130 02:13:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:12.388 Malloc1 00:12:12.388 02:13:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:12.388 02:13:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:12.956 /dev/nbd0 00:12:12.956 02:13:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:12.956 02:13:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:12.956 1+0 records in 00:12:12.956 1+0 records out 
00:12:12.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328571 s, 12.5 MB/s 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:12.956 02:13:00 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:12.956 02:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.956 02:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:12.956 02:13:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:13.214 /dev/nbd1 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:13.214 1+0 records in 00:12:13.214 1+0 records out 00:12:13.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036273 s, 11.3 MB/s 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:13.214 02:13:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.214 02:13:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:13.472 02:13:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:13.472 { 00:12:13.472 "bdev_name": "Malloc0", 00:12:13.472 "nbd_device": "/dev/nbd0" 00:12:13.472 }, 00:12:13.472 { 00:12:13.472 "bdev_name": "Malloc1", 00:12:13.472 "nbd_device": "/dev/nbd1" 00:12:13.472 } 
00:12:13.472 ]' 00:12:13.472 02:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:13.472 { 00:12:13.472 "bdev_name": "Malloc0", 00:12:13.472 "nbd_device": "/dev/nbd0" 00:12:13.472 }, 00:12:13.472 { 00:12:13.472 "bdev_name": "Malloc1", 00:12:13.472 "nbd_device": "/dev/nbd1" 00:12:13.472 } 00:12:13.472 ]' 00:12:13.472 02:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:13.472 02:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:13.472 /dev/nbd1' 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:13.473 /dev/nbd1' 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:13.473 256+0 records in 00:12:13.473 256+0 records out 00:12:13.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563105 s, 186 MB/s 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:13.473 256+0 records in 00:12:13.473 256+0 records out 00:12:13.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329531 s, 31.8 MB/s 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:13.473 256+0 records in 00:12:13.473 256+0 records out 00:12:13.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293941 s, 35.7 MB/s 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:13.473 02:13:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:13.473 02:13:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.732 02:13:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.990 02:13:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:13.990 02:13:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.249 02:13:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:14.506 02:13:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:14.507 02:13:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:14.765 02:13:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:15.023 [2024-05-15 02:13:02.810990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:15.023 [2024-05-15 02:13:02.870517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.023 [2024-05-15 02:13:02.870523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.023 [2024-05-15 02:13:02.902382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:15.023 [2024-05-15 02:13:02.902496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:18.309 02:13:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:18.309 spdk_app_start Round 2 00:12:18.309 02:13:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:18.310 02:13:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61456 /var/tmp/spdk-nbd.sock 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61456 ']' 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
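The device-count checks that close each round (and that just returned 0 above once the disks were stopped) come from parsing the nbd_get_disks RPC output; a standalone sketch, assuming scripts/rpc.py is invoked from the SPDK repo root and that 2 is the expected number of exported devices:

rpc_server=/var/tmp/spdk-nbd.sock
# ask the nbd app which devices it currently exports
nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero when nothing matches, hence the bare true in the trace
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -ne 2 ] && echo "unexpected nbd count: $count"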
00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.310 02:13:05 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:18.310 02:13:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:18.310 Malloc0 00:12:18.310 02:13:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:18.587 Malloc1 00:12:18.587 02:13:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.587 02:13:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:18.847 /dev/nbd0 00:12:18.847 02:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.847 02:13:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:18.847 1+0 records in 00:12:18.847 1+0 records out 
00:12:18.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251349 s, 16.3 MB/s 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:18.847 02:13:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:18.847 02:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.847 02:13:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.847 02:13:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:19.416 /dev/nbd1 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:19.416 1+0 records in 00:12:19.416 1+0 records out 00:12:19.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273281 s, 15.0 MB/s 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:19.416 02:13:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.416 02:13:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:19.675 { 00:12:19.675 "bdev_name": "Malloc0", 00:12:19.675 "nbd_device": "/dev/nbd0" 00:12:19.675 }, 00:12:19.675 { 00:12:19.675 "bdev_name": "Malloc1", 00:12:19.675 "nbd_device": "/dev/nbd1" 00:12:19.675 } 
00:12:19.675 ]' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:19.675 { 00:12:19.675 "bdev_name": "Malloc0", 00:12:19.675 "nbd_device": "/dev/nbd0" 00:12:19.675 }, 00:12:19.675 { 00:12:19.675 "bdev_name": "Malloc1", 00:12:19.675 "nbd_device": "/dev/nbd1" 00:12:19.675 } 00:12:19.675 ]' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:19.675 /dev/nbd1' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:19.675 /dev/nbd1' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:19.675 256+0 records in 00:12:19.675 256+0 records out 00:12:19.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815544 s, 129 MB/s 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:19.675 256+0 records in 00:12:19.675 256+0 records out 00:12:19.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267744 s, 39.2 MB/s 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:19.675 256+0 records in 00:12:19.675 256+0 records out 00:12:19.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284845 s, 36.8 MB/s 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:19.675 02:13:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:19.675 02:13:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.676 02:13:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.934 02:13:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.192 02:13:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:20.450 02:13:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:20.708 02:13:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:20.708 02:13:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:20.708 02:13:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:20.966 02:13:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:20.966 [2024-05-15 02:13:08.915030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:20.966 [2024-05-15 02:13:08.974123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.967 [2024-05-15 02:13:08.974135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.225 [2024-05-15 02:13:09.005942] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:21.225 [2024-05-15 02:13:09.006007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:24.507 02:13:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61456 /var/tmp/spdk-nbd.sock 00:12:24.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61456 ']' 00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
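Process teardown uses the same killprocess helper throughout: it was traced above for the scheduler app (pid 61355) and runs again just below for the app_repeat instance (pid 61456). A reduced sketch; the trace only shows the happy path on Linux, so the non-Linux branch and whatever the real helper does when the target runs under sudo are omitted here:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1        # no pid, nothing to kill
    kill -0 "$pid" || return 0       # process already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1   # the real helper handles this case differently
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                      # reap it so the exit status is collected
}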
00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:24.507 02:13:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:24.507 02:13:12 event.app_repeat -- event/event.sh@39 -- # killprocess 61456 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 61456 ']' 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 61456 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61456 00:12:24.507 killing process with pid 61456 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61456' 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@965 -- # kill 61456 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@970 -- # wait 61456 00:12:24.507 spdk_app_start is called in Round 0. 00:12:24.507 Shutdown signal received, stop current app iteration 00:12:24.507 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:12:24.507 spdk_app_start is called in Round 1. 00:12:24.507 Shutdown signal received, stop current app iteration 00:12:24.507 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:12:24.507 spdk_app_start is called in Round 2. 00:12:24.507 Shutdown signal received, stop current app iteration 00:12:24.507 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:12:24.507 spdk_app_start is called in Round 3. 00:12:24.507 Shutdown signal received, stop current app iteration 00:12:24.507 02:13:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:24.507 02:13:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:24.507 00:12:24.507 real 0m19.701s 00:12:24.507 user 0m44.844s 00:12:24.507 sys 0m2.864s 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:24.507 ************************************ 00:12:24.507 END TEST app_repeat 00:12:24.507 ************************************ 00:12:24.507 02:13:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 02:13:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:24.507 02:13:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:24.507 02:13:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:24.507 02:13:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.507 02:13:12 event -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 ************************************ 00:12:24.507 START TEST cpu_locks 00:12:24.507 ************************************ 00:12:24.507 02:13:12 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:24.507 * Looking for test storage... 
00:12:24.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:24.507 02:13:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:24.507 02:13:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:24.507 02:13:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:24.507 02:13:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:24.507 02:13:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:24.507 02:13:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.507 02:13:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 ************************************ 00:12:24.507 START TEST default_locks 00:12:24.507 ************************************ 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62094 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62094 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62094 ']' 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:24.507 02:13:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 [2024-05-15 02:13:12.487036] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
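Every sub-test here starts spdk_tgt and then blocks in waitforlisten until the new process is alive and its UNIX-domain RPC socket answers (max_retries=100 above). A rough equivalent of that wait loop, assuming a one-second poll and rpc_get_methods as the probe rather than whatever autotest_common.sh actually issues:

    #!/usr/bin/env bash
    # Wait until a pid is alive and its UNIX-domain RPC socket serves requests.
    waitforsocket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # Process gone? Give up early, like the "No such process" branch below.
            kill -0 "$pid" 2>/dev/null || return 1
            # Socket reachable? rpc.py exits 0 once the target answers.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }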
00:12:24.507 [2024-05-15 02:13:12.487699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:12:24.766 [2024-05-15 02:13:12.624728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.766 [2024-05-15 02:13:12.703200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.700 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.700 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:12:25.700 02:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62094 00:12:25.700 02:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62094 00:12:25.700 02:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62094 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 62094 ']' 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 62094 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62094 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:25.957 killing process with pid 62094 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62094' 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 62094 00:12:25.957 02:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 62094 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62094 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62094 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62094 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62094 ']' 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.524 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:26.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62094) - No such process 00:12:26.524 ERROR: process (pid: 62094) is no longer running 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:26.524 00:12:26.524 real 0m1.817s 00:12:26.524 user 0m2.074s 00:12:26.524 sys 0m0.494s 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.524 02:13:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:26.524 ************************************ 00:12:26.524 END TEST default_locks 00:12:26.524 ************************************ 00:12:26.524 02:13:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:26.524 02:13:14 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:26.524 02:13:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.524 02:13:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:26.524 ************************************ 00:12:26.524 START TEST default_locks_via_rpc 00:12:26.524 ************************************ 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62153 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62153 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62153 ']' 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.524 02:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.525 [2024-05-15 02:13:14.357379] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:26.525 [2024-05-15 02:13:14.357548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62153 ] 00:12:26.525 [2024-05-15 02:13:14.495744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.783 [2024-05-15 02:13:14.569191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62153 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62153 00:12:27.350 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62153 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 62153 ']' 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 62153 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62153 
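default_locks and default_locks_via_rpc hinge on the same probe shown above: a target started with -m 0x1 should hold a POSIX lock whose path contains spdk_cpu_lock, visible through lslocks, and the via_rpc variant drops and re-takes it with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A condensed sketch of that flow, assuming rpc.py exposes the two methods as subcommands the way rpc_cmd does here through the Go client:

    #!/usr/bin/env bash
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    pid=$1   # pid of the spdk_tgt started with -m 0x1

    # Same probe as locks_exist in cpu_locks.sh: does this pid hold its core lock?
    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    locks_exist "$pid"                     # expected to hold the lock after startup

    $rpc framework_disable_cpumask_locks   # release the /var/tmp/spdk_cpu_lock_* files
    ! locks_exist "$pid"                   # no lock held any more

    $rpc framework_enable_cpumask_locks    # re-claim the cores from the mask
    locks_exist "$pid"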
00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62153' 00:12:27.915 killing process with pid 62153 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 62153 00:12:27.915 02:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 62153 00:12:28.173 00:12:28.173 real 0m1.728s 00:12:28.173 user 0m1.912s 00:12:28.173 sys 0m0.488s 00:12:28.173 02:13:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.173 02:13:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 ************************************ 00:12:28.173 END TEST default_locks_via_rpc 00:12:28.173 ************************************ 00:12:28.173 02:13:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:28.173 02:13:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:28.173 02:13:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.173 02:13:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 ************************************ 00:12:28.173 START TEST non_locking_app_on_locked_coremask 00:12:28.173 ************************************ 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62216 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62216 /var/tmp/spdk.sock 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62216 ']' 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.173 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 [2024-05-15 02:13:16.118624] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:28.173 [2024-05-15 02:13:16.118712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62216 ] 00:12:28.431 [2024-05-15 02:13:16.251975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.431 [2024-05-15 02:13:16.347638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62236 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62236 /var/tmp/spdk2.sock 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62236 ']' 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.689 02:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.689 [2024-05-15 02:13:16.617434] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:28.689 [2024-05-15 02:13:16.617572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62236 ] 00:12:28.947 [2024-05-15 02:13:16.772503] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
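non_locking_app_on_locked_coremask starts a second target on the same core as the first; it only comes up because it is told not to take the core lock and to serve RPC on its own socket, which is why the second instance logs "CPU core locks deactivated." above. The two invocations, reduced to the flags that matter (paths as used throughout this run):

    #!/usr/bin/env bash
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First target: claims core 0 and creates its spdk_cpu_lock file.
    $SPDK_BIN -m 0x1 &
    pid1=$!

    # Second target on the same core: must skip the lock and use a second socket,
    # otherwise it would fail with "Cannot create lock on core 0 ...".
    $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!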
00:12:28.947 [2024-05-15 02:13:16.772562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.947 [2024-05-15 02:13:16.900762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.885 02:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.885 02:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:29.885 02:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62216 00:12:29.885 02:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62216 00:12:29.885 02:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62216 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62216 ']' 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62216 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62216 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:30.820 killing process with pid 62216 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62216' 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62216 00:12:30.820 02:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62216 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62236 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62236 ']' 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62236 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62236 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:31.078 killing process with pid 62236 00:12:31.078 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62236' 00:12:31.078 02:13:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62236 00:12:31.337 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62236 00:12:31.595 00:12:31.595 real 0m3.326s 00:12:31.595 user 0m3.908s 00:12:31.595 sys 0m0.946s 00:12:31.595 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.595 ************************************ 00:12:31.595 END TEST non_locking_app_on_locked_coremask 00:12:31.595 ************************************ 00:12:31.595 02:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:31.595 02:13:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:31.595 02:13:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:31.595 02:13:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.595 02:13:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:31.595 ************************************ 00:12:31.595 START TEST locking_app_on_unlocked_coremask 00:12:31.595 ************************************ 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62310 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62310 /var/tmp/spdk.sock 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62310 ']' 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:31.595 02:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:31.595 [2024-05-15 02:13:19.509018] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:31.595 [2024-05-15 02:13:19.509157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:12:31.853 [2024-05-15 02:13:19.657723] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:31.853 [2024-05-15 02:13:19.657802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.853 [2024-05-15 02:13:19.746492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62338 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62338 /var/tmp/spdk2.sock 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62338 ']' 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:32.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:32.788 02:13:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:32.788 [2024-05-15 02:13:20.586268] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:32.788 [2024-05-15 02:13:20.586371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62338 ] 00:12:32.788 [2024-05-15 02:13:20.738563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.046 [2024-05-15 02:13:20.865544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.612 02:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.612 02:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:33.612 02:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62338 00:12:33.612 02:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:33.612 02:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62338 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62310 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62310 ']' 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62310 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62310 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:34.573 killing process with pid 62310 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62310' 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62310 00:12:34.573 02:13:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62310 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62338 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62338 ']' 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62338 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62338 00:12:35.141 killing process with pid 62338 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62338' 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62338 00:12:35.141 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62338 00:12:35.399 ************************************ 00:12:35.399 END TEST locking_app_on_unlocked_coremask 00:12:35.399 ************************************ 00:12:35.399 00:12:35.399 real 0m3.975s 00:12:35.399 user 0m4.753s 00:12:35.399 sys 0m0.946s 00:12:35.399 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:35.399 02:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 02:13:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:35.656 02:13:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:35.656 02:13:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:35.656 02:13:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 ************************************ 00:12:35.656 START TEST locking_app_on_locked_coremask 00:12:35.656 ************************************ 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:12:35.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62417 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62417 /var/tmp/spdk.sock 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62417 ']' 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:35.656 02:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 [2024-05-15 02:13:23.518230] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:35.656 [2024-05-15 02:13:23.518368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62417 ] 00:12:35.656 [2024-05-15 02:13:23.661288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.914 [2024-05-15 02:13:23.759498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62445 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62445 /var/tmp/spdk2.sock 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62445 /var/tmp/spdk2.sock 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:36.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62445 /var/tmp/spdk2.sock 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62445 ']' 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.849 02:13:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:36.849 [2024-05-15 02:13:24.590034] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:36.849 [2024-05-15 02:13:24.590164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62445 ] 00:12:36.849 [2024-05-15 02:13:24.738926] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62417 has claimed it. 00:12:36.849 [2024-05-15 02:13:24.739000] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:37.416 ERROR: process (pid: 62445) is no longer running 00:12:37.416 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62445) - No such process 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62417 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62417 00:12:37.416 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62417 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62417 ']' 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62417 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62417 00:12:37.982 killing process with pid 62417 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62417' 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62417 00:12:37.982 02:13:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62417 00:12:38.240 00:12:38.240 real 0m2.777s 00:12:38.240 user 0m3.329s 00:12:38.240 sys 0m0.662s 00:12:38.240 02:13:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.240 02:13:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.240 ************************************ 00:12:38.240 END TEST locking_app_on_locked_coremask 00:12:38.240 ************************************ 00:12:38.499 02:13:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:38.499 02:13:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:38.499 02:13:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.499 02:13:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:38.499 ************************************ 00:12:38.499 START TEST locking_overlapped_coremask 00:12:38.499 ************************************ 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62496 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62496 /var/tmp/spdk.sock 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62496 ']' 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.499 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:38.499 [2024-05-15 02:13:26.334375] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
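locking_app_on_locked_coremask exercised the failure path just above: pid 62445 was pointed at the core pid 62417 already holds, without --disable-cpumask-locks, so spdk_app_start reports "Cannot create lock on core 0, probably process 62417 has claimed it" and exits, and the NOT waitforlisten wrapper counts that exit as the expected result. A sketch of checking for that outcome; the log path is made up for illustration and the non-zero exit on a failed claim is assumed from the "Unable to acquire lock ... exiting" error:

    #!/usr/bin/env bash
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # Second instance on an already-claimed core 0, lock checking left enabled.
    $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock &> /tmp/spdk2.log &
    pid2=$!

    # The process is expected to die instead of listening on /var/tmp/spdk2.sock.
    if wait "$pid2"; then
        echo "unexpected: second target started despite the core lock" >&2
    fi
    grep -q "Cannot create lock on core 0" /tmp/spdk2.log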
00:12:38.499 [2024-05-15 02:13:26.334522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62496 ] 00:12:38.499 [2024-05-15 02:13:26.477728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.759 [2024-05-15 02:13:26.561204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.759 [2024-05-15 02:13:26.561276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.759 [2024-05-15 02:13:26.561288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62513 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62513 /var/tmp/spdk2.sock 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62513 /var/tmp/spdk2.sock 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62513 /var/tmp/spdk2.sock 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62513 ']' 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.759 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:38.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:38.760 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.760 02:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:39.018 [2024-05-15 02:13:26.858857] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:39.018 [2024-05-15 02:13:26.858993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:12:39.276 [2024-05-15 02:13:27.094462] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62496 has claimed it. 00:12:39.276 [2024-05-15 02:13:27.094605] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:39.843 ERROR: process (pid: 62513) is no longer running 00:12:39.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62513) - No such process 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62496 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 62496 ']' 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 62496 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62496 00:12:39.843 killing process with pid 62496 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62496' 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 62496 00:12:39.843 02:13:27 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 62496 00:12:40.101 00:12:40.101 real 0m1.764s 00:12:40.101 user 0m4.810s 00:12:40.101 sys 0m0.382s 00:12:40.101 ************************************ 00:12:40.101 END TEST locking_overlapped_coremask 00:12:40.101 ************************************ 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 02:13:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:40.101 02:13:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:40.101 02:13:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.101 02:13:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 ************************************ 00:12:40.101 START TEST locking_overlapped_coremask_via_rpc 00:12:40.101 ************************************ 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62564 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62564 /var/tmp/spdk.sock 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62564 ']' 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.101 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.358 [2024-05-15 02:13:28.137030] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:40.358 [2024-05-15 02:13:28.137127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62564 ] 00:12:40.358 [2024-05-15 02:13:28.265260] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
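After the overlapped-coremask case, check_remaining_locks asserts that the lock files left in /var/tmp are exactly the ones for the surviving -m 0x7 target: spdk_cpu_lock_000 through spdk_cpu_lock_002, one file per claimed core. The comparison is just a glob matched against a brace expansion, as the expression above shows:

    #!/usr/bin/env bash
    # One lock file per core claimed by the running target (cores 0-2 for -m 0x7).
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})

    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "locks match mask 0x7"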
00:12:40.358 [2024-05-15 02:13:28.265320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:40.358 [2024-05-15 02:13:28.339907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.358 [2024-05-15 02:13:28.339979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.358 [2024-05-15 02:13:28.339986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62581 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62581 /var/tmp/spdk2.sock 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62581 ']' 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:40.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.616 02:13:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.616 [2024-05-15 02:13:28.613653] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:40.616 [2024-05-15 02:13:28.614175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62581 ] 00:12:40.874 [2024-05-15 02:13:28.762748] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:40.874 [2024-05-15 02:13:28.762836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:41.132 [2024-05-15 02:13:28.908039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.132 [2024-05-15 02:13:28.911487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:41.132 [2024-05-15 02:13:28.911494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.066 [2024-05-15 02:13:29.747619] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62564 has claimed it. 00:12:42.066 2024/05/15 02:13:29 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:12:42.066 request: 00:12:42.066 { 00:12:42.066 "method": "framework_enable_cpumask_locks", 00:12:42.066 "params": {} 00:12:42.066 } 00:12:42.066 Got JSON-RPC error response 00:12:42.066 GoRPCClient: error on JSON-RPC call 00:12:42.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
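The via_rpc variant ends with both targets running lock-free on overlapping masks (0x7 and 0x1c). Enabling locks on the first target succeeds, but the same RPC against the second target's socket fails with -32603 "Failed to claim CPU core: 2", because core 2 is now locked by pid 62564; that is the GoRPCClient error and JSON request dumped above. A sketch of the same pair of calls using rpc.py directly instead of the Go client used in this run:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (mask 0x7, RPC on the default socket) takes its core locks.
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Second target overlaps on core 2, so its claim is expected to be rejected
    # with JSON-RPC error -32603 "Failed to claim CPU core: 2".
    if $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "unexpected: overlapping core locks were granted" >&2
    fi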
00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62564 /var/tmp/spdk.sock 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62564 ']' 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.066 02:13:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62581 /var/tmp/spdk2.sock 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62581 ']' 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:42.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.324 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:42.581 00:12:42.581 real 0m2.419s 00:12:42.581 user 0m1.503s 00:12:42.581 sys 0m0.186s 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.581 02:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 END TEST locking_overlapped_coremask_via_rpc 00:12:42.581 ************************************ 00:12:42.581 02:13:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:42.581 02:13:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62564 ]] 00:12:42.581 02:13:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62564 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62564 ']' 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62564 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62564 00:12:42.581 killing process with pid 62564 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62564' 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62564 00:12:42.581 02:13:30 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62564 00:12:43.147 02:13:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62581 ]] 00:12:43.147 02:13:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62581 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62581 ']' 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62581 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:43.147 
02:13:30 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62581 00:12:43.147 killing process with pid 62581 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62581' 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62581 00:12:43.147 02:13:30 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62581 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:43.405 Process with pid 62564 is not found 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62564 ]] 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62564 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62564 ']' 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62564 00:12:43.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62564) - No such process 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62564 is not found' 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62581 ]] 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62581 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62581 ']' 00:12:43.405 Process with pid 62581 is not found 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62581 00:12:43.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62581) - No such process 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62581 is not found' 00:12:43.405 02:13:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:43.405 ************************************ 00:12:43.405 END TEST cpu_locks 00:12:43.405 ************************************ 00:12:43.405 00:12:43.405 real 0m18.958s 00:12:43.405 user 0m34.548s 00:12:43.405 sys 0m4.743s 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.405 02:13:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 ************************************ 00:12:43.405 END TEST event 00:12:43.405 ************************************ 00:12:43.405 00:12:43.405 real 0m45.995s 00:12:43.405 user 1m31.629s 00:12:43.405 sys 0m8.263s 00:12:43.405 02:13:31 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.405 02:13:31 event -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 02:13:31 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:43.405 02:13:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:43.405 02:13:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.405 02:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 ************************************ 00:12:43.405 START TEST thread 00:12:43.405 ************************************ 00:12:43.405 02:13:31 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:43.405 * Looking for test storage... 
00:12:43.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:43.663 02:13:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:43.663 02:13:31 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:12:43.663 02:13:31 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.663 02:13:31 thread -- common/autotest_common.sh@10 -- # set +x 00:12:43.663 ************************************ 00:12:43.663 START TEST thread_poller_perf 00:12:43.663 ************************************ 00:12:43.663 02:13:31 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:43.663 [2024-05-15 02:13:31.442905] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:43.663 [2024-05-15 02:13:31.443016] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62727 ] 00:12:43.663 [2024-05-15 02:13:31.583562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.663 [2024-05-15 02:13:31.667254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.663 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:45.036 ====================================== 00:12:45.036 busy:2210715703 (cyc) 00:12:45.036 total_run_count: 285000 00:12:45.036 tsc_hz: 2200000000 (cyc) 00:12:45.036 ====================================== 00:12:45.036 poller_cost: 7756 (cyc), 3525 (nsec) 00:12:45.036 00:12:45.036 real 0m1.353s 00:12:45.036 user 0m1.190s 00:12:45.036 sys 0m0.050s 00:12:45.036 02:13:32 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.036 02:13:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:45.036 ************************************ 00:12:45.036 END TEST thread_poller_perf 00:12:45.036 ************************************ 00:12:45.036 02:13:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:45.036 02:13:32 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:12:45.036 02:13:32 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.036 02:13:32 thread -- common/autotest_common.sh@10 -- # set +x 00:12:45.036 ************************************ 00:12:45.036 START TEST thread_poller_perf 00:12:45.036 ************************************ 00:12:45.036 02:13:32 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:45.036 [2024-05-15 02:13:32.837131] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:45.036 [2024-05-15 02:13:32.837241] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62763 ] 00:12:45.036 [2024-05-15 02:13:32.972784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.297 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:12:45.297 [2024-05-15 02:13:33.057255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.231 ====================================== 00:12:46.231 busy:2205796508 (cyc) 00:12:46.231 total_run_count: 3093000 00:12:46.231 tsc_hz: 2200000000 (cyc) 00:12:46.231 ====================================== 00:12:46.231 poller_cost: 713 (cyc), 324 (nsec) 00:12:46.231 00:12:46.231 real 0m1.344s 00:12:46.231 user 0m1.197s 00:12:46.231 sys 0m0.038s 00:12:46.231 02:13:34 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:46.231 02:13:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 ************************************ 00:12:46.231 END TEST thread_poller_perf 00:12:46.231 ************************************ 00:12:46.231 02:13:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:46.231 ************************************ 00:12:46.231 END TEST thread 00:12:46.231 ************************************ 00:12:46.231 00:12:46.231 real 0m2.835s 00:12:46.231 user 0m2.436s 00:12:46.231 sys 0m0.173s 00:12:46.231 02:13:34 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:46.231 02:13:34 thread -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 02:13:34 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:46.231 02:13:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:46.231 02:13:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:46.231 02:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 ************************************ 00:12:46.231 START TEST accel 00:12:46.231 ************************************ 00:12:46.231 02:13:34 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:46.489 * Looking for test storage... 00:12:46.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:46.489 02:13:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:46.489 02:13:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:46.489 02:13:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:46.489 02:13:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62837 00:12:46.489 02:13:34 accel -- accel/accel.sh@63 -- # waitforlisten 62837 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@827 -- # '[' -z 62837 ']' 00:12:46.489 02:13:34 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.489 02:13:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
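(Editor's aside on the two poller_perf summaries above: the reported poller_cost is just the busy cycle count divided by total_run_count, converted to nanoseconds through the printed tsc_hz. A quick shell sanity check of both runs, integer rounding as in the log, illustrative only:)

busy=2210715703; runs=285000; tsc_hz=2200000000    # figures from the 1 us period run
echo "$(( busy / runs )) cyc, $(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 7756 cyc, 3525 nsec
busy=2205796508; runs=3093000                      # figures from the 0 us period run
echo "$(( busy / runs )) cyc, $(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 713 cyc, 324 nsec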
00:12:46.489 02:13:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:46.489 02:13:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.489 02:13:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:46.489 02:13:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.489 02:13:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.489 02:13:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.489 02:13:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:46.489 02:13:34 accel -- accel/accel.sh@41 -- # jq -r . 00:12:46.489 [2024-05-15 02:13:34.378820] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:46.489 [2024-05-15 02:13:34.379201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:12:46.748 [2024-05-15 02:13:34.548273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.748 [2024-05-15 02:13:34.633300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.683 02:13:35 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.683 02:13:35 accel -- common/autotest_common.sh@860 -- # return 0 00:12:47.683 02:13:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:47.683 02:13:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:47.683 02:13:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:47.683 02:13:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:47.684 02:13:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:47.684 02:13:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.684 02:13:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 
02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:47.684 02:13:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:47.684 02:13:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:47.684 02:13:35 accel -- accel/accel.sh@75 -- # killprocess 62837 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@946 -- # '[' -z 62837 ']' 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@950 -- # kill -0 62837 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@951 -- # uname 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62837 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62837' 00:12:47.684 killing process with pid 62837 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@965 -- # kill 62837 00:12:47.684 02:13:35 accel -- common/autotest_common.sh@970 -- # wait 62837 00:12:47.943 02:13:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:47.943 02:13:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.943 02:13:35 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:47.943 02:13:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:12:47.943 02:13:35 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:47.943 02:13:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:47.943 02:13:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.943 02:13:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.943 ************************************ 00:12:47.943 START TEST accel_missing_filename 00:12:47.943 ************************************ 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.943 02:13:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:47.943 02:13:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:47.943 [2024-05-15 02:13:35.951294] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:47.943 [2024-05-15 02:13:35.951422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62907 ] 00:12:48.202 [2024-05-15 02:13:36.088249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.202 [2024-05-15 02:13:36.150521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.202 [2024-05-15 02:13:36.187773] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:48.460 [2024-05-15 02:13:36.237766] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:48.460 A filename is required. 
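(Editor's aside: the abort just above is the intended negative result — compress/decompress workloads in accel_perf take their input through -l, so running -w compress without it stops with "A filename is required." A corrected form of the same command, using the test input the next case feeds it and dropping the -c /dev/fd/62 config descriptor that the harness supplies, would look like this; illustrative only, paths as in this run:)

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib

(Adding -y to that command is exactly what the accel_compress_verify case below does, and it is rejected in turn because compression does not support the verify option.)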
00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:12:48.460 ************************************ 00:12:48.460 END TEST accel_missing_filename 00:12:48.460 ************************************ 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.460 00:12:48.460 real 0m0.440s 00:12:48.460 user 0m0.309s 00:12:48.460 sys 0m0.078s 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.460 02:13:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:48.460 02:13:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:48.460 02:13:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:12:48.460 02:13:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.460 02:13:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.460 ************************************ 00:12:48.460 START TEST accel_compress_verify 00:12:48.460 ************************************ 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.460 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.460 02:13:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:48.460 02:13:36 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:12:48.460 [2024-05-15 02:13:36.427825] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:48.460 [2024-05-15 02:13:36.427965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:12:48.719 [2024-05-15 02:13:36.562380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.719 [2024-05-15 02:13:36.623367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.719 [2024-05-15 02:13:36.656484] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:48.719 [2024-05-15 02:13:36.699078] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:48.977 00:12:48.977 Compression does not support the verify option, aborting. 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:12:48.977 ************************************ 00:12:48.977 END TEST accel_compress_verify 00:12:48.977 ************************************ 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.977 00:12:48.977 real 0m0.395s 00:12:48.977 user 0m0.272s 00:12:48.977 sys 0m0.072s 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.977 02:13:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 02:13:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 ************************************ 00:12:48.977 START TEST accel_wrong_workload 00:12:48.977 ************************************ 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:48.977 02:13:36 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:48.977 02:13:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:48.977 Unsupported workload type: foobar 00:12:48.977 [2024-05-15 02:13:36.856638] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:48.977 accel_perf options: 00:12:48.977 [-h help message] 00:12:48.977 [-q queue depth per core] 00:12:48.977 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:48.977 [-T number of threads per core 00:12:48.977 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:48.977 [-t time in seconds] 00:12:48.977 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:48.977 [ dif_verify, , dif_generate, dif_generate_copy 00:12:48.977 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:48.977 [-l for compress/decompress workloads, name of uncompressed input file 00:12:48.977 [-S for crc32c workload, use this seed value (default 0) 00:12:48.977 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:48.977 [-f for fill workload, use this BYTE value (default 255) 00:12:48.977 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:48.977 [-y verify result if this switch is on] 00:12:48.977 [-a tasks to allocate per core (default: same value as -q)] 00:12:48.977 Can be used to spread operations across a wider range of memory. 
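(Editor's aside: the usage dump above is printed because foobar is not a recognized -w workload, and the negative-buffers case that follows trips the same parser with -x -1. For contrast, a well-formed invocation — the shape the crc32c cases further down drive through the harness, again minus the harness-supplied -c /dev/fd/62 config descriptor — would be; illustrative only:)

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y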
00:12:48.977 ************************************ 00:12:48.977 END TEST accel_wrong_workload 00:12:48.977 ************************************ 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.977 00:12:48.977 real 0m0.028s 00:12:48.977 user 0m0.016s 00:12:48.977 sys 0m0.012s 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.977 02:13:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 02:13:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.977 02:13:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 ************************************ 00:12:48.977 START TEST accel_negative_buffers 00:12:48.977 ************************************ 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:48.977 02:13:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:48.977 -x option must be non-negative. 
00:12:48.977 [2024-05-15 02:13:36.922949] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:48.977 accel_perf options: 00:12:48.977 [-h help message] 00:12:48.977 [-q queue depth per core] 00:12:48.977 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:48.977 [-T number of threads per core 00:12:48.977 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:48.977 [-t time in seconds] 00:12:48.977 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:48.977 [ dif_verify, , dif_generate, dif_generate_copy 00:12:48.977 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:48.977 [-l for compress/decompress workloads, name of uncompressed input file 00:12:48.977 [-S for crc32c workload, use this seed value (default 0) 00:12:48.977 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:48.977 [-f for fill workload, use this BYTE value (default 255) 00:12:48.977 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:48.977 [-y verify result if this switch is on] 00:12:48.977 [-a tasks to allocate per core (default: same value as -q)] 00:12:48.977 Can be used to spread operations across a wider range of memory. 00:12:48.977 ************************************ 00:12:48.977 END TEST accel_negative_buffers 00:12:48.977 ************************************ 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:48.977 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.977 00:12:48.978 real 0m0.031s 00:12:48.978 user 0m0.017s 00:12:48.978 sys 0m0.014s 00:12:48.978 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.978 02:13:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 02:13:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:48.978 02:13:36 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:48.978 02:13:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.978 02:13:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 ************************************ 00:12:48.978 START TEST accel_crc32c 00:12:48.978 ************************************ 00:12:48.978 02:13:36 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:48.978 02:13:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:49.236 [2024-05-15 02:13:36.989742] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:49.236 [2024-05-15 02:13:36.989859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62990 ] 00:12:49.236 [2024-05-15 02:13:37.128259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.236 [2024-05-15 02:13:37.210406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:49.494 02:13:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:50.428 02:13:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.428 00:12:50.428 real 0m1.450s 00:12:50.428 user 0m1.260s 00:12:50.428 sys 0m0.086s 00:12:50.428 02:13:38 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:50.428 02:13:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:50.428 ************************************ 00:12:50.428 END TEST accel_crc32c 00:12:50.428 ************************************ 00:12:50.686 02:13:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:50.686 02:13:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:50.686 02:13:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.686 02:13:38 accel -- common/autotest_common.sh@10 -- # set +x 00:12:50.686 ************************************ 00:12:50.686 START TEST accel_crc32c_C2 00:12:50.686 ************************************ 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.686 02:13:38 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:50.686 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:50.686 [2024-05-15 02:13:38.475814] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:50.686 [2024-05-15 02:13:38.475929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:12:50.686 [2024-05-15 02:13:38.614116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.686 [2024-05-15 02:13:38.697428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.945 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.945 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.945 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.946 02:13:38 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:50.946 02:13:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:51.879 00:12:51.879 real 0m1.428s 00:12:51.879 user 0m1.237s 00:12:51.879 sys 0m0.088s 00:12:51.879 ************************************ 00:12:51.879 END TEST accel_crc32c_C2 00:12:51.879 ************************************ 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.879 02:13:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 02:13:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:52.137 02:13:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:52.137 02:13:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.137 02:13:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 ************************************ 00:12:52.137 START TEST accel_copy 00:12:52.137 ************************************ 00:12:52.137 02:13:39 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:52.137 
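accel_crc32c_C2 above repeats the workload with -C 2, which appears to split the payload across two chained source buffers feeding a single checksum (the copy_crc32c_C2 run later in this log shows its second size field doubling from 4096 to 8192 bytes, which is consistent with that reading, though the log does not spell it out). The property such a test leans on is that a CRC can be carried across buffers by seeding each computation with the previous result. A quick standalone sketch of that identity, using the stdlib zlib.crc32 (the IEEE CRC-32, used here only as a stand-in for the Castagnoli variant):

```python
import os
import zlib

a, b = os.urandom(4096), os.urandom(4096)

# Feeding the first result in as the seed of the second call equals one pass
# over the concatenated payload; the crc32c() sketch above obeys the same rule:
#   crc32c(b, seed=crc32c(a)) == crc32c(a + b)
assert zlib.crc32(b, zlib.crc32(a)) == zlib.crc32(a + b)
```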
02:13:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:52.137 02:13:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:52.137 [2024-05-15 02:13:39.937785] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:52.137 [2024-05-15 02:13:39.937881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63059 ] 00:12:52.137 [2024-05-15 02:13:40.069834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.396 [2024-05-15 02:13:40.156289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:52.396 02:13:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:53.340 02:13:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.340 00:12:53.340 real 0m1.430s 00:12:53.340 user 0m1.239s 00:12:53.340 sys 0m0.090s 00:12:53.340 02:13:41 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.340 ************************************ 00:12:53.340 END TEST accel_copy 00:12:53.340 ************************************ 00:12:53.340 02:13:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.599 02:13:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:53.599 02:13:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:12:53.599 02:13:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.599 02:13:41 accel -- common/autotest_common.sh@10 -- # set +x 00:12:53.599 ************************************ 00:12:53.599 START TEST accel_fill 00:12:53.599 ************************************ 00:12:53.599 02:13:41 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:53.599 02:13:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:53.599 [2024-05-15 02:13:41.414051] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
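accel_copy, which ends above, is the simplest workload in this stretch: accel_perf copies a source buffer into a destination and, since every invocation here passes -y, verifies the result. The surrounding IFS=: / read -r var val / case "$var" lines appear to be the harness reading the tool's key:value output so it can assert at accel.sh@27 that an opcode was reported and that the fallback software module handled it. A trivial model of the data movement being verified (names chosen here, not the SPDK API):

```python
import os

src = os.urandom(4096)
dst = bytearray(4096)

dst[:] = src              # the copy opcode's effect on the buffers
assert bytes(dst) == src  # what the -y verification amounts to
```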
00:12:53.599 [2024-05-15 02:13:41.414198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63088 ] 00:12:53.599 [2024-05-15 02:13:41.566546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.857 [2024-05-15 02:13:41.655537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.857 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:53.858 02:13:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:55.243 ************************************ 00:12:55.243 END TEST accel_fill 00:12:55.243 ************************************ 00:12:55.243 02:13:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:55.243 00:12:55.243 real 0m1.477s 00:12:55.243 user 0m1.271s 00:12:55.243 sys 0m0.104s 00:12:55.243 02:13:42 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.243 02:13:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:55.243 02:13:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:55.243 02:13:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:55.243 02:13:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.243 02:13:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:55.243 ************************************ 00:12:55.243 START TEST accel_copy_crc32c 00:12:55.243 ************************************ 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:55.243 02:13:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:55.243 [2024-05-15 02:13:42.924810] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
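accel_fill above was launched with -w fill -f 128 -q 64 -a 64 -y, and the parsed configuration shows the fill byte as 0x80 (decimal 128) alongside two 64s that appear to track the other numeric flags. A fill workload writes a single byte value across the destination; a short model of the expected buffer state, again purely illustrative:

```python
FILL = 0x80                 # the -f 128 fill byte, as echoed in the log
dst = bytearray(4096)

dst[:] = bytes([FILL]) * len(dst)
assert all(b == FILL for b in dst)
```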
00:12:55.244 [2024-05-15 02:13:42.924926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63128 ] 00:12:55.244 [2024-05-15 02:13:43.070270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.244 [2024-05-15 02:13:43.158954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:55.244 02:13:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:56.615 ************************************ 00:12:56.615 END TEST accel_copy_crc32c 00:12:56.615 ************************************ 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.615 00:12:56.615 real 0m1.456s 00:12:56.615 user 0m1.260s 00:12:56.615 sys 0m0.097s 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.615 02:13:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:56.615 02:13:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:56.615 02:13:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:56.615 02:13:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.615 02:13:44 accel -- common/autotest_common.sh@10 -- # set +x 00:12:56.615 ************************************ 00:12:56.615 START TEST accel_copy_crc32c_C2 00:12:56.615 ************************************ 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
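accel_copy_crc32c, closed out above, fuses the two earlier operations: the payload is copied to the destination and a CRC-32C of that payload is produced in the same request. A compact standalone model of the combined result, with zlib.crc32 again standing in for the Castagnoli checksum:

```python
import os
import zlib

src = os.urandom(4096)
dst = bytearray(4096)

dst[:] = src                  # the copy half of the operation
checksum = zlib.crc32(src)    # the checksum half (stand-in polynomial)

assert bytes(dst) == src
assert zlib.crc32(bytes(dst)) == checksum   # the copied data checksums identically
```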
copy_crc32c -y -C 2 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:56.615 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:56.615 [2024-05-15 02:13:44.422171] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:56.615 [2024-05-15 02:13:44.422298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63157 ] 00:12:56.615 [2024-05-15 02:13:44.560909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.615 [2024-05-15 02:13:44.623094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:56.881 02:13:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.841 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.841 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.841 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.841 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.841 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:57.842 00:12:57.842 real 0m1.407s 00:12:57.842 user 0m1.238s 00:12:57.842 sys 0m0.069s 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.842 02:13:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:57.842 ************************************ 00:12:57.842 END TEST accel_copy_crc32c_C2 00:12:57.842 ************************************ 00:12:57.842 02:13:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
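The -C 2 variant that finishes above reports buffer sizes of '4096 bytes' and '8192 bytes' in its parsed configuration, consistent with two 4 KiB source segments being copied and checksummed as one 8 KiB payload; the log does not name those fields, so treat that reading as an assumption. Under it, the expected outcome is just the chaining identity from the earlier note applied to the copied data:

```python
import os
import zlib

seg1, seg2 = os.urandom(4096), os.urandom(4096)
dst = bytearray(8192)

dst[:4096], dst[4096:] = seg1, seg2     # two chained sources, one destination
# One checksum over the whole copied payload equals the chained per-segment pass.
assert zlib.crc32(bytes(dst)) == zlib.crc32(seg2, zlib.crc32(seg1))
```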
accel_test -t 1 -w dualcast -y 00:12:57.842 02:13:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:57.842 02:13:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.842 02:13:45 accel -- common/autotest_common.sh@10 -- # set +x 00:12:57.842 ************************************ 00:12:57.842 START TEST accel_dualcast 00:12:57.842 ************************************ 00:12:57.842 02:13:45 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:57.842 02:13:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:58.100 [2024-05-15 02:13:45.871224] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:58.100 [2024-05-15 02:13:45.871344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63197 ] 00:12:58.100 [2024-05-15 02:13:46.009870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.100 [2024-05-15 02:13:46.078454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.100 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:58.358 02:13:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 
02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.290 ************************************ 00:12:59.290 END TEST accel_dualcast 00:12:59.290 ************************************ 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:59.290 02:13:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.290 00:12:59.290 real 0m1.401s 00:12:59.290 user 0m1.232s 00:12:59.290 sys 0m0.075s 00:12:59.290 02:13:47 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.290 02:13:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:59.290 02:13:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:59.290 02:13:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:59.290 02:13:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.290 02:13:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:59.290 ************************************ 00:12:59.290 START TEST accel_compare 00:12:59.290 ************************************ 00:12:59.290 02:13:47 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:59.290 02:13:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:59.547 [2024-05-15 02:13:47.308129] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
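At this point the harness has moved from accel_dualcast to accel_compare: run_test wraps accel_test -t 1 -w compare -y, which invokes /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y, and the config loop below prints the same settings already traced for dualcast (a '4096 bytes' buffer, the software module, 32/32, a '1 seconds' run, verify Yes). A minimal sketch of rerunning just this workload by hand, keeping only the flags that actually appear in this log and assuming a built SPDK tree at the same path:
# Sketch only: reuses the exact flags shown in the trace above; running without the
# generated -c /dev/fd/62 config (and whatever defaults then apply) is an assumption.
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w compare -y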
00:12:59.548 [2024-05-15 02:13:47.308245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63226 ] 00:12:59.548 [2024-05-15 02:13:47.447466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.548 [2024-05-15 02:13:47.533627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:59.806 02:13:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.740 ************************************ 00:13:00.740 END TEST accel_compare 00:13:00.740 ************************************ 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:13:00.740 02:13:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.740 00:13:00.740 real 0m1.421s 00:13:00.740 user 0m1.240s 00:13:00.740 sys 0m0.081s 00:13:00.740 02:13:48 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.740 02:13:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 02:13:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:00.740 02:13:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:00.740 02:13:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.740 02:13:48 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 ************************************ 00:13:00.740 START TEST accel_xor 00:13:00.740 ************************************ 00:13:00.740 02:13:48 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:00.740 02:13:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:00.741 02:13:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:00.998 [2024-05-15 02:13:48.764953] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
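The accel_xor case starting here uses accel_perf -c /dev/fd/62 -t 1 -w xor -y, and the val=2 read out below is the source-buffer count for this first XOR pass. Most of the surrounding bulk is bash xtrace from accel.sh walking its case "$var" parser (the repeated case/IFS/read lines), so when reading a saved copy of this console output it can help to strip that pattern first. An illustrative filter, where build.log is a hypothetical capture of this output with one trace record per line:
# Illustrative only; the patterns are copied from the xtrace lines in this log.
grep -v -- '# case "$var" in' build.log | grep -v -- '# read -r var val' | grep -v -- '# IFS=:'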
00:13:00.998 [2024-05-15 02:13:48.765033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:13:00.998 [2024-05-15 02:13:48.896163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.998 [2024-05-15 02:13:48.983817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:01.257 02:13:49 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:01.257 02:13:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:02.191 02:13:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.191 00:13:02.191 real 0m1.423s 00:13:02.191 user 0m1.250s 00:13:02.192 sys 0m0.078s 00:13:02.192 ************************************ 00:13:02.192 END TEST accel_xor 00:13:02.192 ************************************ 00:13:02.192 02:13:50 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.192 02:13:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:02.192 02:13:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:13:02.192 02:13:50 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:02.192 02:13:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.192 02:13:50 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.450 ************************************ 00:13:02.450 START TEST accel_xor 00:13:02.450 ************************************ 00:13:02.450 02:13:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:02.450 02:13:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:02.450 [2024-05-15 02:13:50.232765] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:13:02.450 [2024-05-15 02:13:50.232883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63295 ] 00:13:02.450 [2024-05-15 02:13:50.371888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.450 [2024-05-15 02:13:50.434543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:02.711 02:13:50 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:02.711 02:13:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.643 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:03.644 02:13:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.644 00:13:03.644 real 0m1.404s 00:13:03.644 user 0m1.241s 00:13:03.644 sys 0m0.070s 00:13:03.644 02:13:51 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:03.644 02:13:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:03.644 ************************************ 00:13:03.644 END TEST accel_xor 00:13:03.644 ************************************ 00:13:03.644 02:13:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:03.644 02:13:51 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:03.644 02:13:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:03.644 02:13:51 accel -- common/autotest_common.sh@10 -- # set +x 00:13:03.644 ************************************ 00:13:03.644 START TEST accel_dif_verify 00:13:03.644 ************************************ 00:13:03.644 02:13:51 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:03.644 02:13:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:13:03.902 [2024-05-15 02:13:51.675467] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:13:03.902 [2024-05-15 02:13:51.675587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:13:03.902 [2024-05-15 02:13:51.815756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.902 [2024-05-15 02:13:51.874754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:03.902 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:04.160 02:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:05.095 ************************************ 00:13:05.095 END TEST accel_dif_verify 00:13:05.095 ************************************ 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:05.095 02:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.095 00:13:05.095 real 0m1.398s 00:13:05.095 user 0m1.228s 00:13:05.095 sys 0m0.076s 00:13:05.095 02:13:53 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.095 02:13:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:13:05.095 02:13:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:05.095 02:13:53 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:05.095 02:13:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.095 02:13:53 accel -- common/autotest_common.sh@10 -- # set +x 00:13:05.095 ************************************ 00:13:05.095 START TEST accel_dif_generate 00:13:05.095 ************************************ 00:13:05.095 02:13:53 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:13:05.095 02:13:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:13:05.354 [2024-05-15 02:13:53.112674] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:05.354 [2024-05-15 02:13:53.112806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63363 ] 00:13:05.354 [2024-05-15 02:13:53.248254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.354 [2024-05-15 02:13:53.308275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 
02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.354 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.355 02:13:53 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:05.355 02:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:06.726 02:13:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.726 00:13:06.726 real 0m1.393s 00:13:06.726 user 0m1.218s 00:13:06.726 sys 0m0.076s 00:13:06.726 02:13:54 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.726 
02:13:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:13:06.726 ************************************ 00:13:06.726 END TEST accel_dif_generate 00:13:06.726 ************************************ 00:13:06.726 02:13:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:06.726 02:13:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:06.726 02:13:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.726 02:13:54 accel -- common/autotest_common.sh@10 -- # set +x 00:13:06.726 ************************************ 00:13:06.726 START TEST accel_dif_generate_copy 00:13:06.726 ************************************ 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:06.726 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:13:06.726 [2024-05-15 02:13:54.541936] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
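With accel_dif_generate_copy now starting (accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy), every case so far has closed with an END TEST banner plus real/user/sys timings, all in the 1.4 s range (1.401, 1.421, 1.423, 1.404, 1.398, 1.393). When triaging a run like this it can be handy to tabulate those per-test durations; an illustrative one-liner, where console.log is a hypothetical capture of this output with one record per line:
# Lists the END TEST banners and the wall-clock 'real' figures in log order; illustrative only.
grep -Eo 'END TEST [A-Za-z_]+|real[[:space:]]+[0-9]+m[0-9.]+s' console.log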
00:13:06.726 [2024-05-15 02:13:54.542056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63393 ] 00:13:06.726 [2024-05-15 02:13:54.678684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.985 [2024-05-15 02:13:54.763092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.985 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.986 02:13:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.359 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.360 00:13:08.360 real 0m1.430s 00:13:08.360 user 0m1.239s 00:13:08.360 sys 0m0.089s 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.360 02:13:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:13:08.360 ************************************ 00:13:08.360 END TEST accel_dif_generate_copy 00:13:08.360 ************************************ 00:13:08.360 02:13:55 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:08.360 02:13:55 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.360 02:13:55 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:08.360 02:13:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.360 02:13:55 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.360 ************************************ 00:13:08.360 START TEST accel_comp 00:13:08.360 ************************************ 00:13:08.360 02:13:55 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:08.360 02:13:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:08.360 [2024-05-15 02:13:56.011996] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:08.360 [2024-05-15 02:13:56.012093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63428 ] 00:13:08.360 [2024-05-15 02:13:56.142286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.360 [2024-05-15 02:13:56.207617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:08.360 02:13:56 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:08.360 02:13:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.735 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:09.736 ************************************ 00:13:09.736 END TEST accel_comp 00:13:09.736 ************************************ 00:13:09.736 02:13:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:09.736 00:13:09.736 real 0m1.393s 00:13:09.736 user 0m1.221s 00:13:09.736 sys 0m0.075s 00:13:09.736 02:13:57 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.736 02:13:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 02:13:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:09.736 02:13:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:09.736 02:13:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.736 02:13:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 ************************************ 00:13:09.736 START TEST accel_decomp 00:13:09.736 ************************************ 00:13:09.736 02:13:57 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:09.736 
02:13:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:09.736 [2024-05-15 02:13:57.443339] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:09.736 [2024-05-15 02:13:57.444033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:13:09.736 [2024-05-15 02:13:57.576584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.736 [2024-05-15 02:13:57.662952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:09.736 02:13:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.122 02:13:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.122 00:13:11.122 real 0m1.425s 00:13:11.122 user 0m1.243s 00:13:11.122 sys 0m0.085s 00:13:11.122 02:13:58 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.122 02:13:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:11.122 ************************************ 00:13:11.122 END TEST accel_decomp 00:13:11.122 ************************************ 00:13:11.122 02:13:58 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:11.122 02:13:58 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:11.123 02:13:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.123 02:13:58 accel -- common/autotest_common.sh@10 -- # set +x 
00:13:11.123 ************************************ 00:13:11.123 START TEST accel_decmop_full 00:13:11.123 ************************************ 00:13:11.123 02:13:58 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:13:11.123 02:13:58 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:13:11.123 [2024-05-15 02:13:58.914827] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:13:11.123 [2024-05-15 02:13:58.914947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63495 ] 00:13:11.123 [2024-05-15 02:13:59.054493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.123 [2024-05-15 02:13:59.130319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:11.386 02:13:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.320 02:14:00 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.320 02:14:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:12.321 ************************************ 00:13:12.321 END TEST accel_decmop_full 00:13:12.321 ************************************ 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:12.321 02:14:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:12.321 00:13:12.321 real 0m1.435s 00:13:12.321 user 0m1.256s 00:13:12.321 sys 0m0.082s 00:13:12.321 02:14:00 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:12.321 02:14:00 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:13:12.579 02:14:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:12.579 02:14:00 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:12.579 02:14:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.579 02:14:00 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.579 ************************************ 00:13:12.579 START TEST accel_decomp_mcore 00:13:12.579 ************************************ 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:12.579 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:12.579 [2024-05-15 02:14:00.383742] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:12.579 [2024-05-15 02:14:00.384568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:13:12.579 [2024-05-15 02:14:00.519906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.838 [2024-05-15 02:14:00.612560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.838 [2024-05-15 02:14:00.612658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.838 [2024-05-15 02:14:00.612781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.838 [2024-05-15 02:14:00.612784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.838 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:12.839 02:14:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:14.213 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.214 00:13:14.214 real 0m1.468s 00:13:14.214 user 0m4.514s 00:13:14.214 sys 0m0.109s 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:14.214 02:14:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:14.214 ************************************ 00:13:14.214 END TEST accel_decomp_mcore 00:13:14.214 ************************************ 00:13:14.214 02:14:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:14.214 02:14:01 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:14.214 02:14:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:14.214 02:14:01 accel -- common/autotest_common.sh@10 -- # set +x 00:13:14.214 ************************************ 00:13:14.214 START TEST accel_decomp_full_mcore 00:13:14.214 ************************************ 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:14.214 02:14:01 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:13:14.214 [2024-05-15 02:14:01.888478] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:14.214 [2024-05-15 02:14:01.888601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63549 ] 00:13:14.214 [2024-05-15 02:14:02.031135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.214 [2024-05-15 02:14:02.095585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.214 [2024-05-15 02:14:02.095669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.214 [2024-05-15 02:14:02.095809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.214 [2024-05-15 02:14:02.095814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.214 02:14:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:15.635 00:13:15.635 real 0m1.449s 00:13:15.635 user 0m4.547s 00:13:15.635 sys 0m0.096s 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.635 ************************************ 00:13:15.635 END TEST accel_decomp_full_mcore 00:13:15.635 ************************************ 00:13:15.635 02:14:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:15.635 02:14:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:15.635 02:14:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:15.635 02:14:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.635 02:14:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:15.635 ************************************ 00:13:15.635 START TEST accel_decomp_mthread 00:13:15.635 ************************************ 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:15.635 [2024-05-15 02:14:03.375107] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
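accel_decomp_mthread, which begins here, keeps the workload identical but trades the core mask for a thread count. A comparable direct invocation, under the same path assumptions as the sketch above, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2

With -T 2 (traced below as val=2) the decompress jobs are submitted from two threads while DPDK is restricted to the single core 0x1, so this case exercises two submitters on one reactor instead of fanning out across a core mask; note the 4096-byte default transfer size here, in contrast to the "full" cases.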
00:13:15.635 [2024-05-15 02:14:03.375201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:13:15.635 [2024-05-15 02:14:03.507260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.635 [2024-05-15 02:14:03.568452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:15.635 02:14:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.008 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 ************************************ 00:13:17.009 END TEST accel_decomp_mthread 00:13:17.009 ************************************ 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:17.009 00:13:17.009 real 0m1.400s 00:13:17.009 user 0m1.227s 00:13:17.009 sys 0m0.079s 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.009 02:14:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 02:14:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.009 02:14:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:17.009 02:14:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.009 02:14:04 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 ************************************ 00:13:17.009 START TEST accel_decomp_full_mthread 00:13:17.009 ************************************ 00:13:17.009 02:14:04 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:17.009 02:14:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:17.009 [2024-05-15 02:14:04.815830] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
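The full multi-threaded case being set up here simply combines the two switches — whole-file transfers and two submitting threads. A comparable manual run, with the same assumptions as the sketches above, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2

The trace below accordingly records the 111250-byte payload again together with val=2 for the thread count; as in the other decompress cases the wall-clock time stays near one second because -t 1 fixes the measurement window.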
00:13:17.009 [2024-05-15 02:14:04.815936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63603 ] 00:13:17.009 [2024-05-15 02:14:04.954330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.267 [2024-05-15 02:14:05.044683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.268 02:14:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.642 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.643 ************************************ 00:13:18.643 END TEST accel_decomp_full_mthread 00:13:18.643 ************************************ 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:18.643 00:13:18.643 real 0m1.461s 00:13:18.643 user 0m1.279s 00:13:18.643 sys 0m0.086s 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.643 02:14:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:18.643 02:14:06 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:13:18.643 02:14:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:18.643 02:14:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:18.643 02:14:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:18.643 02:14:06 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:18.643 02:14:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:18.643 02:14:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.643 02:14:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.643 02:14:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.643 02:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.643 02:14:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:18.643 02:14:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:18.643 02:14:06 accel -- accel/accel.sh@41 -- # jq -r . 00:13:18.643 ************************************ 00:13:18.643 START TEST accel_dif_functional_tests 00:13:18.643 ************************************ 00:13:18.643 02:14:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:18.643 [2024-05-15 02:14:06.355152] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:18.643 [2024-05-15 02:14:06.355260] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ] 00:13:18.643 [2024-05-15 02:14:06.494784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.643 [2024-05-15 02:14:06.578111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.643 [2024-05-15 02:14:06.578196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.643 [2024-05-15 02:14:06.578207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.643 00:13:18.643 00:13:18.643 CUnit - A unit testing framework for C - Version 2.1-3 00:13:18.643 http://cunit.sourceforge.net/ 00:13:18.643 00:13:18.643 00:13:18.643 Suite: accel_dif 00:13:18.643 Test: verify: DIF generated, GUARD check ...passed 00:13:18.643 Test: verify: DIF generated, APPTAG check ...passed 00:13:18.643 Test: verify: DIF generated, REFTAG check ...passed 00:13:18.643 Test: verify: DIF not generated, GUARD check ...passed 00:13:18.643 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 02:14:06.638321] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:18.643 [2024-05-15 02:14:06.638410] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:18.643 [2024-05-15 02:14:06.638451] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:18.643 passed 00:13:18.643 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 02:14:06.638562] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:18.643 passed 00:13:18.643 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:18.643 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 02:14:06.638605] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:18.643 [2024-05-15 02:14:06.638631] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:18.643 [2024-05-15 02:14:06.638691] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:18.643 passed 00:13:18.643 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:18.643 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:18.643 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:18.643 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:13:18.643 Test: generate copy: DIF generated, GUARD check ...passed 00:13:18.643 Test: generate copy: DIF generated, APTTAG check ...[2024-05-15 02:14:06.639052] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:18.643 passed 00:13:18.643 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:18.643 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:18.643 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:18.643 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:18.643 Test: generate copy: iovecs-len validate ...[2024-05-15 02:14:06.639477] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:18.643 passed 00:13:18.643 Test: generate copy: buffer alignment validate ...passed 00:13:18.643 00:13:18.643 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.643 suites 1 1 n/a 0 0 00:13:18.643 tests 20 20 20 0 0 00:13:18.643 asserts 204 204 204 0 n/a 00:13:18.643 00:13:18.643 Elapsed time = 0.005 seconds 00:13:18.901 00:13:18.901 real 0m0.538s 00:13:18.901 user 0m0.630s 00:13:18.901 sys 0m0.122s 00:13:18.901 02:14:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.901 02:14:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:18.901 ************************************ 00:13:18.901 END TEST accel_dif_functional_tests 00:13:18.901 ************************************ 00:13:18.901 00:13:18.901 real 0m32.631s 00:13:18.901 user 0m36.137s 00:13:18.901 sys 0m4.948s 00:13:18.901 ************************************ 00:13:18.901 END TEST accel 00:13:18.901 ************************************ 00:13:18.901 02:14:06 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.901 02:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.901 02:14:06 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:18.901 02:14:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:18.901 02:14:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.901 02:14:06 -- common/autotest_common.sh@10 -- # set +x 00:13:18.901 ************************************ 00:13:18.901 START TEST accel_rpc 00:13:18.901 ************************************ 00:13:18.901 02:14:06 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:19.159 * Looking for test storage... 00:13:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
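accel_rpc, which is starting up here, is the first of these suites to talk to a long-lived spdk_tgt over JSON-RPC instead of driving accel_perf. The opcode-assignment flow it exercises can be replayed by hand against a target launched with --wait-for-rpc, using the same rpc.py helper that appears in the trace (a sketch; rpc.py defaults to the /var/tmp/spdk.sock socket the test waits on):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments

In the test itself the copy opcode is first pointed at a deliberately bogus 'incorrect' module and then at software before framework_start_init runs; the pass condition is that accel_get_opc_assignments reports software for copy afterwards, which is what the jq -r .copy | grep software check below verifies.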
00:13:19.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:19.159 02:14:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:19.159 02:14:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63691 00:13:19.159 02:14:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:19.159 02:14:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63691 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 63691 ']' 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:19.159 02:14:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.159 [2024-05-15 02:14:07.034039] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:19.159 [2024-05-15 02:14:07.034352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63691 ] 00:13:19.418 [2024-05-15 02:14:07.181218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.418 [2024-05-15 02:14:07.256828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.985 02:14:07 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:19.985 02:14:07 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:19.985 02:14:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:19.985 02:14:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:19.985 02:14:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:19.985 02:14:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:19.985 02:14:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:19.985 02:14:07 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:19.985 02:14:07 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.985 02:14:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 ************************************ 00:13:19.985 START TEST accel_assign_opcode 00:13:19.985 ************************************ 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 [2024-05-15 02:14:07.989415] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.985 02:14:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 [2024-05-15 02:14:07.997401] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.243 software 00:13:20.243 ************************************ 00:13:20.243 END TEST accel_assign_opcode 00:13:20.243 ************************************ 00:13:20.243 00:13:20.243 real 0m0.202s 00:13:20.243 user 0m0.047s 00:13:20.243 sys 0m0.009s 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.243 02:14:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:20.243 02:14:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63691 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 63691 ']' 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 63691 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63691 00:13:20.243 killing process with pid 63691 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63691' 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@965 -- # kill 63691 00:13:20.243 02:14:08 accel_rpc -- common/autotest_common.sh@970 -- # wait 63691 00:13:20.809 00:13:20.809 real 0m1.663s 00:13:20.809 user 0m1.834s 00:13:20.809 sys 0m0.330s 00:13:20.809 ************************************ 00:13:20.809 END TEST accel_rpc 00:13:20.809 ************************************ 00:13:20.809 02:14:08 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.809 02:14:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.809 02:14:08 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:20.809 02:14:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:20.809 02:14:08 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:13:20.809 02:14:08 -- common/autotest_common.sh@10 -- # set +x 00:13:20.809 ************************************ 00:13:20.809 START TEST app_cmdline 00:13:20.809 ************************************ 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:20.809 * Looking for test storage... 00:13:20.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:20.809 02:14:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:20.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.809 02:14:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63785 00:13:20.809 02:14:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63785 00:13:20.809 02:14:08 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 63785 ']' 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.809 02:14:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:20.809 [2024-05-15 02:14:08.741714] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:20.809 [2024-05-15 02:14:08.741821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:13:21.067 [2024-05-15 02:14:08.881273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.067 [2024-05-15 02:14:08.965684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.001 02:14:09 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.001 02:14:09 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:13:22.001 02:14:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:22.001 { 00:13:22.001 "fields": { 00:13:22.001 "commit": "2dc74a001", 00:13:22.001 "major": 24, 00:13:22.001 "minor": 5, 00:13:22.001 "patch": 0, 00:13:22.001 "suffix": "-pre" 00:13:22.001 }, 00:13:22.001 "version": "SPDK v24.05-pre git sha1 2dc74a001" 00:13:22.001 } 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@26 -- # sort 
00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:22.260 02:14:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:22.260 02:14:10 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.519 2024/05/15 02:14:10 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:13:22.519 request: 00:13:22.519 { 00:13:22.519 "method": "env_dpdk_get_mem_stats", 00:13:22.519 "params": {} 00:13:22.519 } 00:13:22.519 Got JSON-RPC error response 00:13:22.519 GoRPCClient: error on JSON-RPC call 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.519 02:14:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63785 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 63785 ']' 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 63785 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63785 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:22.519 killing process with pid 63785 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63785' 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@965 -- # kill 63785 00:13:22.519 02:14:10 app_cmdline -- common/autotest_common.sh@970 -- # wait 63785 00:13:22.778 
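The Method-not-found failure just above is the point of the cmdline test: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so anything outside that whitelist must be rejected. Replaying the check by hand against such a target, with the same rpc.py helper, looks like:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: returns the version JSON shown above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two whitelisted methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected with code -32601, Method not found

The NOT wrapper in the trace inverts the exit status, so the test passes precisely because this last call fails.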
00:13:22.778 real 0m2.074s 00:13:22.778 user 0m2.754s 00:13:22.778 sys 0m0.404s 00:13:22.778 02:14:10 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.778 02:14:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:22.778 ************************************ 00:13:22.778 END TEST app_cmdline 00:13:22.778 ************************************ 00:13:22.778 02:14:10 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:22.778 02:14:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:22.778 02:14:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.778 02:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:22.778 ************************************ 00:13:22.778 START TEST version 00:13:22.778 ************************************ 00:13:22.778 02:14:10 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:23.037 * Looking for test storage... 00:13:23.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:23.037 02:14:10 version -- app/version.sh@17 -- # get_header_version major 00:13:23.037 02:14:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:23.037 02:14:10 version -- app/version.sh@14 -- # tr -d '"' 00:13:23.037 02:14:10 version -- app/version.sh@14 -- # cut -f2 00:13:23.037 02:14:10 version -- app/version.sh@17 -- # major=24 00:13:23.037 02:14:10 version -- app/version.sh@18 -- # get_header_version minor 00:13:23.037 02:14:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:23.037 02:14:10 version -- app/version.sh@14 -- # cut -f2 00:13:23.037 02:14:10 version -- app/version.sh@14 -- # tr -d '"' 00:13:23.037 02:14:10 version -- app/version.sh@18 -- # minor=5 00:13:23.037 02:14:10 version -- app/version.sh@19 -- # get_header_version patch 00:13:23.038 02:14:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:23.038 02:14:10 version -- app/version.sh@14 -- # cut -f2 00:13:23.038 02:14:10 version -- app/version.sh@14 -- # tr -d '"' 00:13:23.038 02:14:10 version -- app/version.sh@19 -- # patch=0 00:13:23.038 02:14:10 version -- app/version.sh@20 -- # get_header_version suffix 00:13:23.038 02:14:10 version -- app/version.sh@14 -- # cut -f2 00:13:23.038 02:14:10 version -- app/version.sh@14 -- # tr -d '"' 00:13:23.038 02:14:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:23.038 02:14:10 version -- app/version.sh@20 -- # suffix=-pre 00:13:23.038 02:14:10 version -- app/version.sh@22 -- # version=24.5 00:13:23.038 02:14:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:23.038 02:14:10 version -- app/version.sh@28 -- # version=24.5rc0 00:13:23.038 02:14:10 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:23.038 02:14:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:23.038 02:14:10 version -- app/version.sh@30 -- # py_version=24.5rc0 00:13:23.038 02:14:10 version -- app/version.sh@31 -- # [[ 24.5rc0 
== \2\4\.\5\r\c\0 ]] 00:13:23.038 00:13:23.038 real 0m0.154s 00:13:23.038 user 0m0.095s 00:13:23.038 sys 0m0.087s 00:13:23.038 02:14:10 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.038 02:14:10 version -- common/autotest_common.sh@10 -- # set +x 00:13:23.038 ************************************ 00:13:23.038 END TEST version 00:13:23.038 ************************************ 00:13:23.038 02:14:10 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@194 -- # uname -s 00:13:23.038 02:14:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:23.038 02:14:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:23.038 02:14:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:23.038 02:14:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@256 -- # timing_exit lib 00:13:23.038 02:14:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.038 02:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:23.038 02:14:10 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:13:23.038 02:14:10 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:13:23.038 02:14:10 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:23.038 02:14:10 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:23.038 02:14:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.038 02:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:23.038 ************************************ 00:13:23.038 START TEST nvmf_tcp 00:13:23.038 ************************************ 00:13:23.038 02:14:10 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:23.298 * Looking for test storage... 00:13:23.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.298 02:14:11 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.298 02:14:11 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.298 02:14:11 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.298 02:14:11 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.298 02:14:11 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.298 02:14:11 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.298 02:14:11 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:13:23.298 02:14:11 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:23.298 02:14:11 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:23.298 02:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:23.298 02:14:11 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:23.298 02:14:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:23.298 02:14:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.298 02:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.298 ************************************ 00:13:23.298 START TEST nvmf_example 00:13:23.298 ************************************ 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:23.298 * Looking for test storage... 
00:13:23.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.298 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.299 02:14:11 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:23.299 Cannot find device "nvmf_init_br" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:23.299 Cannot find device "nvmf_tgt_br" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.299 Cannot find device "nvmf_tgt_br2" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:23.299 Cannot find device "nvmf_init_br" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:23.299 Cannot find device "nvmf_tgt_br" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:23.299 Cannot find device 
"nvmf_tgt_br2" 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:13:23.299 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:23.300 Cannot find device "nvmf_br" 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:23.300 Cannot find device "nvmf_init_if" 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:13:23.300 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.559 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:23.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:13:23.560 00:13:23.560 --- 10.0.0.2 ping statistics --- 00:13:23.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.560 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:23.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:23.560 00:13:23.560 --- 10.0.0.3 ping statistics --- 00:13:23.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.560 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:23.560 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:23.820 00:13:23.820 --- 10.0.0.1 ping statistics --- 00:13:23.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.820 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:23.820 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.820 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:13:23.820 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.820 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64115 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64115 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 64115 ']' 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:23.821 02:14:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.756 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:24.756 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:13:24.756 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:24.756 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.756 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:25.014 02:14:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:37.228 Initializing NVMe Controllers 00:13:37.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:37.228 Initialization complete. Launching workers. 00:13:37.228 ======================================================== 00:13:37.228 Latency(us) 00:13:37.228 Device Information : IOPS MiB/s Average min max 00:13:37.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13625.20 53.22 4698.17 780.63 23190.74 00:13:37.228 ======================================================== 00:13:37.228 Total : 13625.20 53.22 4698.17 780.63 23190.74 00:13:37.228 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.228 rmmod nvme_tcp 00:13:37.228 rmmod nvme_fabrics 00:13:37.228 rmmod nvme_keyring 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64115 ']' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64115 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 64115 ']' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 64115 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64115 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:13:37.228 killing process with pid 64115 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64115' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 64115 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 64115 00:13:37.228 nvmf threads initialize successfully 00:13:37.228 bdev subsystem init successfully 
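Earlier in this test, right after waitforlisten, the example target was configured purely over JSON-RPC and then measured from the host side of the veth network. A hedged replay of that sequence using rpc.py directly (the trace uses the suite's rpc_cmd wrapper; the example app is assumed to be reachable on the default /var/tmp/spdk.sock):

    # TCP transport plus a 64 MiB, 512-byte-block malloc bdev to export
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512          # prints the bdev name, Malloc0 in the trace

    # one subsystem (-a: allow any host, -s: serial), one namespace, one TCP listener
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # the measurement: 10 s of 4 KiB random mixed I/O at queue depth 64 over NVMe/TCP
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The result table above (roughly 13.6k IOPS at about 4.7 ms average latency) comes from a malloc-backed target inside the CI environment, so it is a functional smoke test rather than a tuned performance number.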
00:13:37.228 created a nvmf target service 00:13:37.228 create targets's poll groups done 00:13:37.228 all subsystems of target started 00:13:37.228 nvmf target is running 00:13:37.228 all subsystems of target stopped 00:13:37.228 destroy targets's poll groups done 00:13:37.228 destroyed the nvmf target service 00:13:37.228 bdev subsystem finish successfully 00:13:37.228 nvmf threads destroy successfully 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:37.228 02:14:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:37.229 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.229 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:37.229 00:13:37.229 real 0m12.364s 00:13:37.229 user 0m44.400s 00:13:37.229 sys 0m2.019s 00:13:37.229 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:37.229 02:14:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:37.229 ************************************ 00:13:37.229 END TEST nvmf_example 00:13:37.229 ************************************ 00:13:37.229 02:14:23 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:37.229 02:14:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:37.229 02:14:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.229 02:14:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.229 ************************************ 00:13:37.229 START TEST nvmf_filesystem 00:13:37.229 ************************************ 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:37.229 * Looking for test storage... 
00:13:37.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:37.229 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:37.230 
02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:37.230 #define SPDK_CONFIG_H 00:13:37.230 #define SPDK_CONFIG_APPS 1 00:13:37.230 #define SPDK_CONFIG_ARCH native 00:13:37.230 #undef SPDK_CONFIG_ASAN 00:13:37.230 #define SPDK_CONFIG_AVAHI 1 00:13:37.230 #undef SPDK_CONFIG_CET 00:13:37.230 #define SPDK_CONFIG_COVERAGE 1 00:13:37.230 #define SPDK_CONFIG_CROSS_PREFIX 00:13:37.230 #undef SPDK_CONFIG_CRYPTO 00:13:37.230 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:37.230 #undef SPDK_CONFIG_CUSTOMOCF 00:13:37.230 #undef SPDK_CONFIG_DAOS 00:13:37.230 #define SPDK_CONFIG_DAOS_DIR 00:13:37.230 #define SPDK_CONFIG_DEBUG 1 00:13:37.230 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:37.230 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:37.230 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:37.230 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:37.230 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:37.230 #undef SPDK_CONFIG_DPDK_UADK 00:13:37.230 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:37.230 #define SPDK_CONFIG_EXAMPLES 1 00:13:37.230 #undef SPDK_CONFIG_FC 00:13:37.230 #define SPDK_CONFIG_FC_PATH 00:13:37.230 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:37.230 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:37.230 #undef SPDK_CONFIG_FUSE 00:13:37.230 #undef SPDK_CONFIG_FUZZER 00:13:37.230 #define SPDK_CONFIG_FUZZER_LIB 00:13:37.230 #define SPDK_CONFIG_GOLANG 1 00:13:37.230 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:37.230 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:37.230 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:37.230 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:13:37.230 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:37.230 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:37.230 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:37.230 #define SPDK_CONFIG_IDXD 1 00:13:37.230 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:37.230 #undef SPDK_CONFIG_IPSEC_MB 00:13:37.230 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:37.230 #define SPDK_CONFIG_ISAL 1 00:13:37.230 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:37.230 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:37.230 #define SPDK_CONFIG_LIBDIR 00:13:37.230 #undef SPDK_CONFIG_LTO 00:13:37.230 #define SPDK_CONFIG_MAX_LCORES 00:13:37.230 #define SPDK_CONFIG_NVME_CUSE 1 00:13:37.230 #undef SPDK_CONFIG_OCF 00:13:37.230 #define SPDK_CONFIG_OCF_PATH 00:13:37.230 #define SPDK_CONFIG_OPENSSL_PATH 00:13:37.230 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:37.230 #define SPDK_CONFIG_PGO_DIR 00:13:37.230 #undef SPDK_CONFIG_PGO_USE 00:13:37.230 #define SPDK_CONFIG_PREFIX /usr/local 00:13:37.230 #undef SPDK_CONFIG_RAID5F 00:13:37.230 #undef SPDK_CONFIG_RBD 00:13:37.230 #define SPDK_CONFIG_RDMA 1 00:13:37.230 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:37.230 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:37.230 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 
00:13:37.230 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:37.230 #define SPDK_CONFIG_SHARED 1 00:13:37.230 #undef SPDK_CONFIG_SMA 00:13:37.230 #define SPDK_CONFIG_TESTS 1 00:13:37.230 #undef SPDK_CONFIG_TSAN 00:13:37.230 #define SPDK_CONFIG_UBLK 1 00:13:37.230 #define SPDK_CONFIG_UBSAN 1 00:13:37.230 #undef SPDK_CONFIG_UNIT_TESTS 00:13:37.230 #undef SPDK_CONFIG_URING 00:13:37.230 #define SPDK_CONFIG_URING_PATH 00:13:37.230 #undef SPDK_CONFIG_URING_ZNS 00:13:37.230 #define SPDK_CONFIG_USDT 1 00:13:37.230 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:37.230 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:37.230 #undef SPDK_CONFIG_VFIO_USER 00:13:37.230 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:37.230 #define SPDK_CONFIG_VHOST 1 00:13:37.230 #define SPDK_CONFIG_VIRTIO 1 00:13:37.230 #undef SPDK_CONFIG_VTUNE 00:13:37.230 #define SPDK_CONFIG_VTUNE_DIR 00:13:37.230 #define SPDK_CONFIG_WERROR 1 00:13:37.230 #define SPDK_CONFIG_WPDK_DIR 00:13:37.230 #undef SPDK_CONFIG_XNVME 00:13:37.230 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:37.230 
02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:13:37.230 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:13:37.231 
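The long run of ': <value>' / 'export SPDK_TEST_*' pairs through here is autotest_common.sh giving every test knob a default without clobbering what autorun-spdk.conf already injected; a hedged reconstruction of the idiom with a few flags from this job (defaults assumed to be 0):

    : "${SPDK_TEST_NVME:=0}";  export SPDK_TEST_NVME    # stays 0, traced as ': 0'
    : "${SPDK_TEST_NVMF:=0}";  export SPDK_TEST_NVMF    # traced as ': 1' because the job config pre-set it
    : "${SPDK_RUN_UBSAN:=0}";  export SPDK_RUN_UBSAN    # likewise pre-set to 1 for this run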
02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 
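Right after the flag defaults, the trace below shows autotest_common.sh pointing the dynamic linker and Python at the in-tree build output; condensed, using the directories from the trace ($spdk is shorthand introduced here, not a variable from the script). The same segments appear several times in the exported values, apparently because the file is sourced once per nested test scope.

    spdk=/home/vagrant/spdk_repo/spdk
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$spdk/build/lib:$spdk/dpdk/build/lib:$spdk/build/libvfio-user/usr/local/lib"
    export PYTHONPATH="$PYTHONPATH:$spdk/python:$spdk/test/rpc_plugins"
    export PCI_BLOCK_SYNC_ON_RESET=yes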
00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:37.231 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:37.232 02:14:23 
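The sanitizer plumbing traced just above, condensed (paths and option strings as shown; the real helper assembles the suppression file with cat/echo redirects, so treat this as a sketch):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"    # suppress leak reports that originate in libfuse3
    export LSAN_OPTIONS=suppressions=$asan_suppression_file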
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 64290 ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 64290 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback 
storage_candidates 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.D1U4NU 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.D1U4NU/tests/target /tmp/spdk.D1U4NU 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264516608 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=13814169600 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5210324992 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13814169600 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5210324992 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267756544 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267895808 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:13:37.232 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:13:37.232 02:14:23 
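The df -T pass above fills the mounts/fss/avails/sizes/uses arrays; the comparisons that follow then pick a filesystem with enough room for the test. A condensed, non-verbatim sketch of that selection (values match this run; the real set_test_storage also special-cases tmpfs/ramfs and the root mount):

    requested_size=2214592512    # the requested 2 GiB plus margin, as traced
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')   # /home in this run
        target_space=${avails[$mount_point]}                                    # 13814169600 here
        (( target_space > 0 && target_space >= requested_size )) || continue
        export SPDK_TEST_STORAGE=$target_dir
        break
    done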
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=94523482112 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5179297792 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:13:37.233 * Looking for test storage... 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13814169600 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.233 02:14:23 
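build_nvmf_app_args, traced just above, assembles the target command line that the test launches later; condensed from the trace:

    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")          # seeded earlier by applications.sh
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id 0, enable all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")                  # empty for this run
    # once the target namespace exists (further down), common.sh prefixes the array:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # i.e. ip netns exec nvmf_tgt_ns_spdk ...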
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.233 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:37.234 Cannot find device "nvmf_tgt_br" 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.234 Cannot find device "nvmf_tgt_br2" 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:13:37.234 02:14:23 
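The "Cannot find device" and "Cannot open network namespace" messages here are expected: nvmf_veth_init first tears down any leftover topology, and on a fresh VM there is none. The creation commands that follow in the trace build the veth/bridge layout used by the rest of the test, condensed into one block (addresses and interface names are the NVMF_* values defined above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings further down (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside nvmf_tgt_ns_spdk) confirm that the bridge forwards both ways before the target is started.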
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:37.234 Cannot find device "nvmf_tgt_br" 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:37.234 Cannot find device "nvmf_tgt_br2" 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:37.234 02:14:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:37.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:13:37.234 00:13:37.234 --- 10.0.0.2 ping statistics --- 00:13:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.234 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:37.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:37.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:13:37.234 00:13:37.234 --- 10.0.0.3 ping statistics --- 00:13:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.234 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:37.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:13:37.234 00:13:37.234 --- 10.0.0.1 ping statistics --- 00:13:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.234 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:37.234 ************************************ 00:13:37.234 START TEST nvmf_filesystem_no_in_capsule 00:13:37.234 ************************************ 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@47 -- # in_capsule=0 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=64453 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 64453 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 64453 ']' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:37.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:37.234 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.234 [2024-05-15 02:14:24.179251] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:37.234 [2024-05-15 02:14:24.179914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.234 [2024-05-15 02:14:24.320861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.234 [2024-05-15 02:14:24.385919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.234 [2024-05-15 02:14:24.385974] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.234 [2024-05-15 02:14:24.385986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.234 [2024-05-15 02:14:24.385994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.235 [2024-05-15 02:14:24.386001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
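nvmfappstart, traced above, launches the target inside the namespace and then blocks in waitforlisten until the JSON-RPC socket answers. Roughly (the command line and socket path are the traced ones; the 0.5 s poll interval and the rpc_get_methods probe are assumptions about the helper's internals):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # 64453 in this run
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
        sleep 0.5
    done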
00:13:37.235 [2024-05-15 02:14:24.386124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.235 [2024-05-15 02:14:24.386165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.235 [2024-05-15 02:14:24.386232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.235 [2024-05-15 02:14:24.386239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 [2024-05-15 02:14:24.526843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 [2024-05-15 02:14:24.649394] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:37.235 [2024-05-15 02:14:24.649677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:13:37.235 { 00:13:37.235 "aliases": [ 00:13:37.235 "19507380-4443-4050-82f5-a6d56697cb70" 00:13:37.235 ], 00:13:37.235 "assigned_rate_limits": { 00:13:37.235 "r_mbytes_per_sec": 0, 00:13:37.235 "rw_ios_per_sec": 0, 00:13:37.235 "rw_mbytes_per_sec": 0, 00:13:37.235 "w_mbytes_per_sec": 0 00:13:37.235 }, 00:13:37.235 "block_size": 512, 00:13:37.235 "claim_type": "exclusive_write", 00:13:37.235 "claimed": true, 00:13:37.235 "driver_specific": {}, 00:13:37.235 "memory_domains": [ 00:13:37.235 { 00:13:37.235 "dma_device_id": "system", 00:13:37.235 "dma_device_type": 1 00:13:37.235 }, 00:13:37.235 { 00:13:37.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.235 "dma_device_type": 2 00:13:37.235 } 00:13:37.235 ], 00:13:37.235 "name": "Malloc1", 00:13:37.235 "num_blocks": 1048576, 00:13:37.235 "product_name": "Malloc disk", 00:13:37.235 "supported_io_types": { 00:13:37.235 "abort": true, 00:13:37.235 "compare": false, 00:13:37.235 "compare_and_write": false, 00:13:37.235 "flush": true, 00:13:37.235 "nvme_admin": false, 00:13:37.235 "nvme_io": false, 00:13:37.235 "read": true, 00:13:37.235 "reset": true, 00:13:37.235 
"unmap": true, 00:13:37.235 "write": true, 00:13:37.235 "write_zeroes": true 00:13:37.235 }, 00:13:37.235 "uuid": "19507380-4443-4050-82f5-a6d56697cb70", 00:13:37.235 "zoned": false 00:13:37.235 } 00:13:37.235 ]' 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:37.235 02:14:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:39.138 02:14:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:39.138 02:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:39.138 02:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:39.138 02:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.511 ************************************ 00:13:40.511 START TEST filesystem_ext4 00:13:40.511 ************************************ 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:13:40.511 02:14:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:40.511 mke2fs 1.46.5 (30-Dec-2021) 00:13:40.511 Discarding device blocks: 0/522240 done 00:13:40.511 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:40.511 Filesystem UUID: ccc7dbf9-5806-4950-a268-a0ce98a13118 00:13:40.511 Superblock backups stored on blocks: 00:13:40.511 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:40.511 00:13:40.511 Allocating group tables: 0/64 done 00:13:40.511 Writing inode tables: 0/64 done 00:13:40.511 Creating journal (8192 blocks): done 00:13:40.511 Writing superblocks and filesystem accounting information: 0/64 done 00:13:40.511 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 64453 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.511 ************************************ 00:13:40.511 END TEST filesystem_ext4 00:13:40.511 ************************************ 00:13:40.511 00:13:40.511 real 0m0.374s 00:13:40.511 user 0m0.018s 00:13:40.511 sys 0m0.049s 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:40.511 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.770 02:14:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.770 ************************************ 00:13:40.770 START TEST filesystem_btrfs 00:13:40.770 ************************************ 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:40.770 btrfs-progs v6.6.2 00:13:40.770 See https://btrfs.readthedocs.io for more information. 00:13:40.770 00:13:40.770 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:40.770 NOTE: several default settings have changed in version 5.15, please make sure 00:13:40.770 this does not affect your deployments: 00:13:40.770 - DUP for metadata (-m dup) 00:13:40.770 - enabled no-holes (-O no-holes) 00:13:40.770 - enabled free-space-tree (-R free-space-tree) 00:13:40.770 00:13:40.770 Label: (null) 00:13:40.770 UUID: a03c9efb-f631-4c1e-8f9d-df332a391b19 00:13:40.770 Node size: 16384 00:13:40.770 Sector size: 4096 00:13:40.770 Filesystem size: 510.00MiB 00:13:40.770 Block group profiles: 00:13:40.770 Data: single 8.00MiB 00:13:40.770 Metadata: DUP 32.00MiB 00:13:40.770 System: DUP 8.00MiB 00:13:40.770 SSD detected: yes 00:13:40.770 Zoned device: no 00:13:40.770 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:40.770 Runtime features: free-space-tree 00:13:40.770 Checksum: crc32c 00:13:40.770 Number of devices: 1 00:13:40.770 Devices: 00:13:40.770 ID SIZE PATH 00:13:40.770 1 510.00MiB /dev/nvme0n1p1 00:13:40.770 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.770 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 64453 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.771 ************************************ 00:13:40.771 END TEST filesystem_btrfs 00:13:40.771 ************************************ 00:13:40.771 00:13:40.771 real 0m0.170s 00:13:40.771 user 0m0.020s 00:13:40.771 sys 0m0.060s 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:40.771 02:14:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.771 ************************************ 00:13:40.771 START TEST filesystem_xfs 00:13:40.771 ************************************ 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:13:40.771 02:14:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:41.029 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:41.029 = sectsz=512 attr=2, projid32bit=1 00:13:41.029 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:41.029 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:41.029 data = bsize=4096 blocks=130560, imaxpct=25 00:13:41.029 = sunit=0 swidth=0 blks 00:13:41.029 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:41.029 log =internal log bsize=4096 blocks=16384, version=2 00:13:41.029 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:41.029 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:41.596 Discarding blocks...Done. 
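The ext4 and btrfs subtests above, and the xfs subtest whose mount check follows, all exercise the same create/verify routine from target/filesystem.sh. Condensed into a sketch (device and mount point taken from this run; the loop form is an editorial condensation, the harness actually runs one run_test per filesystem):

# Condensed sketch of nvmf_filesystem_create for each filesystem type.
for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F     # ext4 uses -F, the others -f
    "mkfs.$fstype" $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                             # target (pid 64453 here) must survive the I/O
done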
00:13:41.596 02:14:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:13:41.596 02:14:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 64453 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:44.128 ************************************ 00:13:44.128 END TEST filesystem_xfs 00:13:44.128 ************************************ 00:13:44.128 00:13:44.128 real 0m3.025s 00:13:44.128 user 0m0.023s 00:13:44.128 sys 0m0.046s 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:44.128 
02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 64453 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 64453 ']' 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 64453 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64453 00:13:44.128 killing process with pid 64453 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64453' 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 64453 00:13:44.128 [2024-05-15 02:14:31.966793] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:44.128 02:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 64453 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:44.387 ************************************ 00:13:44.387 END TEST nvmf_filesystem_no_in_capsule 00:13:44.387 ************************************ 00:13:44.387 00:13:44.387 real 0m8.204s 00:13:44.387 user 0m30.318s 00:13:44.387 sys 0m1.636s 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
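The teardown that closes the no-in-capsule half (traced just above) is, in outline, the following; this assumes scripts/rpc.py behind the rpc_cmd wrapper, with the NQN and pid taken from this run.

# Teardown sketch for one test half.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # drop the host-side controller
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the target-side subsystem
kill "$nvmfpid"                                            # stop nvmf_tgt (pid 64453 in this run)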
00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.387 ************************************ 00:13:44.387 START TEST nvmf_filesystem_in_capsule 00:13:44.387 ************************************ 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=64697 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 64697 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 64697 ']' 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.387 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.646 [2024-05-15 02:14:32.438014] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:44.646 [2024-05-15 02:14:32.438142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.646 [2024-05-15 02:14:32.576588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.646 [2024-05-15 02:14:32.636952] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.646 [2024-05-15 02:14:32.637018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.646 [2024-05-15 02:14:32.637030] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.646 [2024-05-15 02:14:32.637039] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
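The half starting here (in_capsule=4096) is set up identically to the previous one except for the transport creation that follows in the trace. The two invocations, copied from the rpc_cmd lines in this log, differ only in -c, the in-capsule data size:

# rpc_cmd is the test harness wrapper around scripts/rpc.py.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule (this half)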
00:13:44.646 [2024-05-15 02:14:32.637047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.646 [2024-05-15 02:14:32.637150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.646 [2024-05-15 02:14:32.637240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.646 [2024-05-15 02:14:32.637635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.646 [2024-05-15 02:14:32.637643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 [2024-05-15 02:14:32.762292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 Malloc1 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.903 02:14:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 [2024-05-15 02:14:32.884950] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:44.903 [2024-05-15 02:14:32.885292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.903 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:13:44.904 { 00:13:44.904 "aliases": [ 00:13:44.904 "48186231-4b2e-444b-9d26-fcfd603f6873" 00:13:44.904 ], 00:13:44.904 "assigned_rate_limits": { 00:13:44.904 "r_mbytes_per_sec": 0, 00:13:44.904 "rw_ios_per_sec": 0, 00:13:44.904 "rw_mbytes_per_sec": 0, 00:13:44.904 "w_mbytes_per_sec": 0 00:13:44.904 }, 00:13:44.904 "block_size": 512, 00:13:44.904 "claim_type": "exclusive_write", 00:13:44.904 "claimed": true, 00:13:44.904 "driver_specific": {}, 00:13:44.904 "memory_domains": [ 00:13:44.904 { 00:13:44.904 "dma_device_id": "system", 00:13:44.904 "dma_device_type": 1 00:13:44.904 }, 00:13:44.904 { 00:13:44.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.904 "dma_device_type": 2 00:13:44.904 } 00:13:44.904 ], 00:13:44.904 "name": "Malloc1", 00:13:44.904 "num_blocks": 1048576, 00:13:44.904 "product_name": "Malloc disk", 00:13:44.904 "supported_io_types": { 00:13:44.904 "abort": true, 00:13:44.904 "compare": false, 00:13:44.904 "compare_and_write": false, 00:13:44.904 "flush": true, 00:13:44.904 "nvme_admin": false, 00:13:44.904 "nvme_io": false, 00:13:44.904 "read": true, 00:13:44.904 "reset": true, 
00:13:44.904 "unmap": true, 00:13:44.904 "write": true, 00:13:44.904 "write_zeroes": true 00:13:44.904 }, 00:13:44.904 "uuid": "48186231-4b2e-444b-9d26-fcfd603f6873", 00:13:44.904 "zoned": false 00:13:44.904 } 00:13:44.904 ]' 00:13:44.904 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:13:45.161 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:13:45.161 02:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:13:45.161 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:13:45.161 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:13:45.161 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:13:45.161 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:45.161 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.419 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.419 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:13:45.419 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.419 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:45.419 02:14:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:13:47.318 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:47.319 02:14:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.693 ************************************ 00:13:48.693 START TEST filesystem_in_capsule_ext4 00:13:48.693 ************************************ 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:13:48.693 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:48.693 mke2fs 1.46.5 (30-Dec-2021) 00:13:48.693 Discarding device blocks: 0/522240 done 00:13:48.693 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:48.693 Filesystem UUID: 79e2b21f-020b-4627-839e-831438fda7c4 00:13:48.693 Superblock backups stored on blocks: 00:13:48.693 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:48.693 00:13:48.693 Allocating group tables: 0/64 done 00:13:48.693 Writing inode tables: 0/64 done 00:13:48.693 Creating journal (8192 blocks): done 00:13:48.693 Writing superblocks and filesystem accounting information: 0/64 done 00:13:48.693 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 64697 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:48.694 00:13:48.694 real 0m0.345s 00:13:48.694 user 0m0.021s 00:13:48.694 sys 0m0.055s 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.694 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 ************************************ 00:13:48.694 END TEST filesystem_in_capsule_ext4 00:13:48.694 ************************************ 00:13:48.952 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:48.952 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:48.952 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.952 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 ************************************ 00:13:48.952 START TEST filesystem_in_capsule_btrfs 00:13:48.952 ************************************ 00:13:48.952 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:48.953 btrfs-progs v6.6.2 00:13:48.953 See https://btrfs.readthedocs.io for more information. 00:13:48.953 00:13:48.953 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:48.953 NOTE: several default settings have changed in version 5.15, please make sure 00:13:48.953 this does not affect your deployments: 00:13:48.953 - DUP for metadata (-m dup) 00:13:48.953 - enabled no-holes (-O no-holes) 00:13:48.953 - enabled free-space-tree (-R free-space-tree) 00:13:48.953 00:13:48.953 Label: (null) 00:13:48.953 UUID: 607770c9-2043-46f8-9996-a2355b0610ab 00:13:48.953 Node size: 16384 00:13:48.953 Sector size: 4096 00:13:48.953 Filesystem size: 510.00MiB 00:13:48.953 Block group profiles: 00:13:48.953 Data: single 8.00MiB 00:13:48.953 Metadata: DUP 32.00MiB 00:13:48.953 System: DUP 8.00MiB 00:13:48.953 SSD detected: yes 00:13:48.953 Zoned device: no 00:13:48.953 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:48.953 Runtime features: free-space-tree 00:13:48.953 Checksum: crc32c 00:13:48.953 Number of devices: 1 00:13:48.953 Devices: 00:13:48.953 ID SIZE PATH 00:13:48.953 1 510.00MiB /dev/nvme0n1p1 00:13:48.953 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 64697 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:48.953 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:49.213 00:13:49.213 real 0m0.242s 00:13:49.213 user 0m0.015s 00:13:49.213 sys 0m0.062s 00:13:49.213 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.213 ************************************ 00:13:49.213 END TEST filesystem_in_capsule_btrfs 00:13:49.213 ************************************ 00:13:49.213 02:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.213 ************************************ 00:13:49.213 START TEST filesystem_in_capsule_xfs 00:13:49.213 ************************************ 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:13:49.213 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:49.213 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:49.213 = sectsz=512 attr=2, projid32bit=1 00:13:49.213 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:49.213 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:49.213 data = bsize=4096 blocks=130560, imaxpct=25 00:13:49.213 = sunit=0 swidth=0 blks 00:13:49.213 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:49.213 log =internal log bsize=4096 blocks=16384, version=2 00:13:49.213 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:49.213 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:49.779 Discarding blocks...Done. 
00:13:49.779 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:13:49.779 02:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 64697 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:51.676 00:13:51.676 real 0m2.560s 00:13:51.676 user 0m0.015s 00:13:51.676 sys 0m0.050s 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:51.676 ************************************ 00:13:51.676 END TEST filesystem_in_capsule_xfs 00:13:51.676 ************************************ 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:51.676 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.935 02:14:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 64697 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 64697 ']' 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 64697 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64697 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:51.935 killing process with pid 64697 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64697' 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 64697 00:13:51.935 [2024-05-15 02:14:39.754127] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:51.935 02:14:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 64697 00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:52.194 00:13:52.194 real 0m7.697s 00:13:52.194 user 0m28.680s 00:13:52.194 sys 0m1.455s 00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.194 ************************************ 00:13:52.194 END TEST nvmf_filesystem_in_capsule 00:13:52.194 ************************************ 00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
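(Editorial aside: each filesystem_in_capsule_* case above runs the same loop from target/filesystem.sh — format the exported namespace, mount it, do a small write/remove cycle, unmount, confirm the nvmf target process is still alive, and confirm the block device and partition are still visible. A minimal standalone approximation, with the device, mountpoint and pid taken from this run rather than from the original script:

    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    nvmf_pid=64697                # nvmf_tgt pid in this run
    mount "$dev" "$mnt"
    touch "$mnt/aaa"              # small write over NVMe-oF
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
    kill -0 "$nvmf_pid"           # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
)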
00:13:52.194 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.195 rmmod nvme_tcp 00:13:52.195 rmmod nvme_fabrics 00:13:52.195 rmmod nvme_keyring 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.195 02:14:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.453 02:14:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:52.453 00:13:52.453 real 0m16.708s 00:13:52.453 user 0m59.231s 00:13:52.453 sys 0m3.442s 00:13:52.453 02:14:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:52.453 02:14:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 ************************************ 00:13:52.453 END TEST nvmf_filesystem 00:13:52.453 ************************************ 00:13:52.453 02:14:40 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:52.453 02:14:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:52.453 02:14:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:52.453 02:14:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 ************************************ 00:13:52.453 START TEST nvmf_target_discovery 00:13:52.453 ************************************ 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:52.453 * Looking for test storage... 
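(Editorial aside: each suite here is launched through the run_test wrapper with an explicit script path, so it can presumably be reproduced outside the CI harness by invoking the same script directly from an SPDK checkout — assuming root privileges and the autorun environment:

    cd /home/vagrant/spdk_repo/spdk
    sudo test/nvmf/target/discovery.sh --transport=tcp
)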
00:13:52.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.453 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:52.454 Cannot find device "nvmf_tgt_br" 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.454 Cannot find device "nvmf_tgt_br2" 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:52.454 Cannot find device "nvmf_tgt_br" 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:52.454 Cannot find device "nvmf_tgt_br2" 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:13:52.454 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.711 02:14:40 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:52.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:13:52.711 00:13:52.711 --- 10.0.0.2 ping statistics --- 00:13:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.711 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:52.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:52.711 00:13:52.711 --- 10.0.0.3 ping statistics --- 00:13:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.711 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:13:52.711 00:13:52.711 --- 10.0.0.1 ping statistics --- 00:13:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.711 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65082 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65082 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 65082 ']' 00:13:52.711 02:14:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.711 02:14:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.968 [2024-05-15 02:14:40.766681] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:52.968 [2024-05-15 02:14:40.766772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.968 [2024-05-15 02:14:40.901888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.968 [2024-05-15 02:14:40.963897] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.968 [2024-05-15 02:14:40.963961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.968 [2024-05-15 02:14:40.963974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.968 [2024-05-15 02:14:40.963983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.968 [2024-05-15 02:14:40.963991] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
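(Editorial aside: the nvmf_veth_init trace above builds the NET_TYPE=virt topology used for the rest of the suite — the initiator stays in the default namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, and a bridge joins the veth peers. A condensed sketch of the same setup; the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2            # initiator -> target reachability check

The target itself is then started inside the namespace, as traced above, via ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.)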
00:13:52.968 [2024-05-15 02:14:40.964220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.968 [2024-05-15 02:14:40.965439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.968 [2024-05-15 02:14:40.965505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.968 [2024-05-15 02:14:40.965515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.226 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.226 [2024-05-15 02:14:41.112430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 Null1 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.227 [2024-05-15 02:14:41.171209] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:53.227 [2024-05-15 02:14:41.171662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 Null2 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 Null3 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.227 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 Null4 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.486 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 4420 00:13:53.486 00:13:53.486 Discovery Log Number of Records 6, Generation counter 6 00:13:53.486 =====Discovery Log Entry 0====== 00:13:53.486 trtype: tcp 00:13:53.486 adrfam: ipv4 00:13:53.486 subtype: current discovery subsystem 00:13:53.486 treq: not required 00:13:53.486 portid: 0 00:13:53.486 trsvcid: 4420 00:13:53.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:53.486 traddr: 10.0.0.2 00:13:53.486 eflags: explicit discovery connections, duplicate discovery information 00:13:53.486 sectype: none 00:13:53.486 =====Discovery Log Entry 1====== 00:13:53.486 trtype: tcp 00:13:53.486 adrfam: ipv4 00:13:53.486 subtype: nvme subsystem 00:13:53.486 treq: not required 00:13:53.486 portid: 0 00:13:53.486 trsvcid: 4420 00:13:53.486 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:53.486 traddr: 10.0.0.2 00:13:53.486 eflags: none 00:13:53.486 sectype: none 00:13:53.486 =====Discovery Log Entry 2====== 00:13:53.486 trtype: tcp 00:13:53.486 adrfam: ipv4 00:13:53.486 subtype: nvme subsystem 00:13:53.486 treq: not required 00:13:53.486 portid: 0 00:13:53.486 trsvcid: 4420 00:13:53.486 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:53.486 traddr: 10.0.0.2 00:13:53.486 eflags: none 00:13:53.486 sectype: none 00:13:53.486 =====Discovery Log Entry 3====== 00:13:53.486 trtype: tcp 00:13:53.486 adrfam: ipv4 00:13:53.486 subtype: nvme subsystem 00:13:53.486 treq: not required 00:13:53.486 portid: 0 00:13:53.486 trsvcid: 4420 00:13:53.486 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:53.486 traddr: 10.0.0.2 00:13:53.486 eflags: none 00:13:53.486 sectype: none 00:13:53.486 =====Discovery Log Entry 4====== 00:13:53.487 trtype: tcp 00:13:53.487 adrfam: ipv4 00:13:53.487 subtype: nvme subsystem 00:13:53.487 treq: not required 00:13:53.487 portid: 0 00:13:53.487 trsvcid: 4420 00:13:53.487 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:53.487 traddr: 10.0.0.2 00:13:53.487 eflags: none 00:13:53.487 sectype: none 00:13:53.487 =====Discovery Log Entry 5====== 00:13:53.487 trtype: tcp 00:13:53.487 adrfam: ipv4 00:13:53.487 subtype: discovery subsystem referral 00:13:53.487 treq: not required 00:13:53.487 portid: 0 00:13:53.487 trsvcid: 4430 00:13:53.487 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:53.487 traddr: 10.0.0.2 00:13:53.487 eflags: none 00:13:53.487 sectype: none 00:13:53.487 Perform nvmf subsystem discovery via RPC 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 [ 00:13:53.487 { 00:13:53.487 "allow_any_host": true, 00:13:53.487 "hosts": [], 00:13:53.487 "listen_addresses": [ 00:13:53.487 { 00:13:53.487 "adrfam": "IPv4", 00:13:53.487 "traddr": "10.0.0.2", 00:13:53.487 "trsvcid": "4420", 00:13:53.487 "trtype": "TCP" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:53.487 "subtype": "Discovery" 00:13:53.487 }, 00:13:53.487 { 00:13:53.487 "allow_any_host": true, 00:13:53.487 "hosts": [], 00:13:53.487 "listen_addresses": [ 00:13:53.487 { 00:13:53.487 "adrfam": "IPv4", 00:13:53.487 "traddr": "10.0.0.2", 00:13:53.487 "trsvcid": "4420", 00:13:53.487 "trtype": "TCP" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "max_cntlid": 65519, 00:13:53.487 "max_namespaces": 32, 00:13:53.487 "min_cntlid": 1, 00:13:53.487 "model_number": "SPDK bdev Controller", 00:13:53.487 "namespaces": [ 00:13:53.487 { 00:13:53.487 "bdev_name": "Null1", 00:13:53.487 "name": "Null1", 00:13:53.487 "nguid": "F3BBAADAE7A74EE29E4E4483757938F9", 00:13:53.487 "nsid": 1, 00:13:53.487 "uuid": "f3bbaada-e7a7-4ee2-9e4e-4483757938f9" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.487 "serial_number": "SPDK00000000000001", 00:13:53.487 "subtype": "NVMe" 00:13:53.487 }, 00:13:53.487 { 00:13:53.487 "allow_any_host": true, 00:13:53.487 "hosts": [], 00:13:53.487 "listen_addresses": [ 00:13:53.487 { 00:13:53.487 "adrfam": "IPv4", 00:13:53.487 "traddr": "10.0.0.2", 00:13:53.487 "trsvcid": "4420", 00:13:53.487 "trtype": "TCP" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "max_cntlid": 65519, 00:13:53.487 "max_namespaces": 32, 00:13:53.487 "min_cntlid": 1, 00:13:53.487 "model_number": "SPDK bdev Controller", 00:13:53.487 "namespaces": [ 00:13:53.487 { 00:13:53.487 "bdev_name": "Null2", 00:13:53.487 "name": "Null2", 00:13:53.487 "nguid": "735B0AD6621D464D8F9B5C81BD683DFE", 00:13:53.487 "nsid": 1, 00:13:53.487 "uuid": "735b0ad6-621d-464d-8f9b-5c81bd683dfe" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:53.487 "serial_number": "SPDK00000000000002", 00:13:53.487 "subtype": "NVMe" 00:13:53.487 }, 00:13:53.487 { 00:13:53.487 "allow_any_host": true, 00:13:53.487 "hosts": [], 00:13:53.487 "listen_addresses": [ 00:13:53.487 { 00:13:53.487 "adrfam": "IPv4", 00:13:53.487 "traddr": "10.0.0.2", 00:13:53.487 "trsvcid": "4420", 00:13:53.487 "trtype": "TCP" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "max_cntlid": 65519, 00:13:53.487 "max_namespaces": 32, 00:13:53.487 "min_cntlid": 1, 00:13:53.487 "model_number": "SPDK bdev Controller", 00:13:53.487 "namespaces": [ 00:13:53.487 { 00:13:53.487 "bdev_name": "Null3", 00:13:53.487 "name": "Null3", 00:13:53.487 "nguid": "788D1F61B6D84820B8D2088CCB469B67", 00:13:53.487 "nsid": 1, 00:13:53.487 "uuid": "788d1f61-b6d8-4820-b8d2-088ccb469b67" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:53.487 "serial_number": "SPDK00000000000003", 00:13:53.487 "subtype": "NVMe" 00:13:53.487 }, 00:13:53.487 { 00:13:53.487 "allow_any_host": true, 00:13:53.487 "hosts": [], 00:13:53.487 "listen_addresses": [ 00:13:53.487 { 00:13:53.487 "adrfam": "IPv4", 00:13:53.487 "traddr": "10.0.0.2", 00:13:53.487 "trsvcid": "4420", 00:13:53.487 "trtype": "TCP" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "max_cntlid": 65519, 00:13:53.487 "max_namespaces": 32, 00:13:53.487 "min_cntlid": 1, 00:13:53.487 "model_number": "SPDK bdev Controller", 00:13:53.487 "namespaces": [ 00:13:53.487 { 00:13:53.487 "bdev_name": "Null4", 00:13:53.487 "name": "Null4", 00:13:53.487 "nguid": "5DB54E1586914F2ABCB32563F99E767B", 00:13:53.487 "nsid": 1, 00:13:53.487 "uuid": "5db54e15-8691-4f2a-bcb3-2563f99e767b" 00:13:53.487 } 00:13:53.487 ], 00:13:53.487 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:53.487 "serial_number": "SPDK00000000000004", 00:13:53.487 "subtype": 
"NVMe" 00:13:53.487 } 00:13:53.487 ] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.487 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.746 rmmod nvme_tcp 00:13:53.746 rmmod nvme_fabrics 00:13:53.746 rmmod nvme_keyring 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65082 ']' 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65082 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 65082 ']' 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 65082 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:53.746 
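(Editorial aside: stripped of the xtrace noise, the discovery test above amounts to the following RPC sequence — rpc_cmd is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock socket, and only cnode1 of the four subsystems is shown:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_null_create Null1 102400 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # initiator side: expect 6 discovery log records (discovery subsystem, 4 NVMe subsystems, 1 referral)
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # teardown mirrors the creation
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete Null1
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
)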
02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65082 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:53.746 killing process with pid 65082 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65082' 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 65082 00:13:53.746 [2024-05-15 02:14:41.615218] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:53.746 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 65082 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:54.005 00:13:54.005 real 0m1.593s 00:13:54.005 user 0m3.414s 00:13:54.005 sys 0m0.504s 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.005 ************************************ 00:13:54.005 02:14:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.005 END TEST nvmf_target_discovery 00:13:54.005 ************************************ 00:13:54.005 02:14:41 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:54.005 02:14:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.005 02:14:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.005 02:14:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.005 ************************************ 00:13:54.005 START TEST nvmf_referrals 00:13:54.005 ************************************ 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:54.005 * Looking for test storage... 
00:13:54.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:54.005 02:14:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.005 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:54.263 Cannot find device "nvmf_tgt_br" 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.263 Cannot find device "nvmf_tgt_br2" 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:54.263 Cannot find device "nvmf_tgt_br" 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:54.263 Cannot find device "nvmf_tgt_br2" 
00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.263 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:54.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:54.522 00:13:54.522 --- 10.0.0.2 ping statistics --- 00:13:54.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.522 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:54.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:13:54.522 00:13:54.522 --- 10.0.0.3 ping statistics --- 00:13:54.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.522 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:13:54.522 00:13:54.522 --- 10.0.0.1 ping statistics --- 00:13:54.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.522 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=65281 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 65281 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 65281 ']' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
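At this point nvmf_veth_init has rebuilt the virtual test network, verified it with the three pings above, and the referrals test has started its own nvmf_tgt (pid 65281) inside the target namespace. A condensed, hedged sketch of that topology, using only interface names and addresses that appear in the trace (ordering is simplified and the comments are ours):

# Host side:   nvmf_init_if (10.0.0.1/24) peered with nvmf_init_br on bridge nvmf_br
# Target side: nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live inside
#              netns nvmf_tgt_ns_spdk, bridged to the host via nvmf_tgt_br/nvmf_tgt_br2
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target data IP
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator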
00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.522 02:14:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.522 [2024-05-15 02:14:42.460600] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:54.522 [2024-05-15 02:14:42.460722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.800 [2024-05-15 02:14:42.601177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.800 [2024-05-15 02:14:42.685997] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.800 [2024-05-15 02:14:42.686061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.800 [2024-05-15 02:14:42.686073] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.800 [2024-05-15 02:14:42.686081] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.800 [2024-05-15 02:14:42.686089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.800 [2024-05-15 02:14:42.686367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.800 [2024-05-15 02:14:42.686537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.800 [2024-05-15 02:14:42.686670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.800 [2024-05-15 02:14:42.686684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 [2024-05-15 02:14:43.539820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 [2024-05-15 02:14:43.562721] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:55.748 [2024-05-15 02:14:43.563249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:55.748 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # 
echo 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:56.008 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.266 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:56.266 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:56.266 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:56.266 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 
--hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:56.267 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 
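The checks traced through this stretch compare two views of the referral table: what the target reports over RPC and what an initiator sees in the discovery log served on 10.0.0.2:8009. A hedged sketch of that comparison, reusing the RPC method names, nvme flags, and jq filters exactly as they appear above (scripts/rpc.py stands in for the test's rpc_cmd wrapper, and the host NQN/ID are the values generated earlier in this log):

# Target-side view: referrals registered against the discovery subsystem.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Initiator-side view: the same addresses must appear in the discovery log
# (every record except the current discovery subsystem itself).
nvme discover \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d \
    --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# A referral may also name a specific subsystem; its discovery-log record then
# carries subtype "nvme subsystem" rather than "discovery subsystem referral".
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The trace above exercises both shapes and then tears each one down with the matching nvmf_discovery_remove_referral call.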
00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.525 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.783 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:56.784 02:14:44 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.784 rmmod nvme_tcp 00:13:56.784 rmmod nvme_fabrics 00:13:56.784 rmmod nvme_keyring 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 65281 ']' 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 65281 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 65281 ']' 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 65281 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.784 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65281 00:13:57.043 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:57.043 killing process with pid 65281 00:13:57.043 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:57.043 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65281' 00:13:57.043 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 65281 00:13:57.043 [2024-05-15 02:14:44.807052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:57.043 02:14:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 65281 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
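The closing assertions above mirror the setup: once every referral has been removed, nvmf_discovery_get_referrals must report length 0 and the initiator's discovery log must contain no records besides the current discovery subsystem, after which the test repeats the module-unload and kill sequence sketched earlier. A minimal sketch of that empty-state check (the refs variable name is ours):

# Both views must drain to empty before END TEST nvmf_referrals is printed.
refs=$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)
[ "$refs" -eq 0 ]
if nvme discover \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d \
       --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d \
       -t tcp -a 10.0.0.2 -s 8009 -o json |
       jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
       grep -q .; then
    echo "unexpected referral entries in the discovery log" >&2
    exit 1
fi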
00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:57.043 00:13:57.043 real 0m3.149s 00:13:57.043 user 0m10.262s 00:13:57.043 sys 0m0.838s 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:57.043 02:14:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:57.043 ************************************ 00:13:57.043 END TEST nvmf_referrals 00:13:57.043 ************************************ 00:13:57.302 02:14:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:57.302 02:14:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:57.302 02:14:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.302 02:14:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.302 ************************************ 00:13:57.302 START TEST nvmf_connect_disconnect 00:13:57.302 ************************************ 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:57.302 * Looking for test storage... 00:13:57.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:57.302 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:57.303 02:14:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.303 02:14:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:57.303 Cannot find device "nvmf_tgt_br" 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.303 Cannot find device "nvmf_tgt_br2" 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:57.303 Cannot find device "nvmf_tgt_br" 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:57.303 Cannot find device "nvmf_tgt_br2" 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:57.303 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link 
set nvmf_init_if up 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:57.562 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:57.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:57.563 00:13:57.563 --- 10.0.0.2 ping statistics --- 00:13:57.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.563 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:57.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:57.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:57.563 00:13:57.563 --- 10.0.0.3 ping statistics --- 00:13:57.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.563 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:57.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:57.563 00:13:57.563 --- 10.0.0.1 ping statistics --- 00:13:57.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.563 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=65569 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 65569 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 65569 ']' 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.563 02:14:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.821 [2024-05-15 02:14:45.633309] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:57.821 [2024-05-15 02:14:45.633454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.821 [2024-05-15 02:14:45.780148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.080 [2024-05-15 02:14:45.853371] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:58.080 [2024-05-15 02:14:45.853484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.080 [2024-05-15 02:14:45.853499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.080 [2024-05-15 02:14:45.853509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.080 [2024-05-15 02:14:45.853519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.080 [2024-05-15 02:14:45.853648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.080 [2024-05-15 02:14:45.853736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.080 [2024-05-15 02:14:45.854238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.080 [2024-05-15 02:14:45.854256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 [2024-05-15 02:14:46.764571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 [2024-05-15 02:14:46.826887] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:59.022 [2024-05-15 02:14:46.827248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:59.022 02:14:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:01.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.442 rmmod nvme_tcp 00:14:10.442 rmmod nvme_fabrics 00:14:10.442 rmmod nvme_keyring 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 65569 ']' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 65569 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 65569 ']' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 65569 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65569 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:10.442 killing process with pid 65569 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65569' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 65569 00:14:10.442 [2024-05-15 02:14:58.128181] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 65569 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:10.442 00:14:10.442 real 0m13.271s 00:14:10.442 user 0m48.637s 00:14:10.442 sys 0m1.990s 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.442 02:14:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.442 ************************************ 00:14:10.442 END TEST nvmf_connect_disconnect 00:14:10.442 ************************************ 00:14:10.442 02:14:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:10.442 02:14:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.442 02:14:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.442 02:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.442 ************************************ 00:14:10.442 START TEST nvmf_multitarget 00:14:10.442 ************************************ 00:14:10.442 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:10.701 * Looking for test storage... 
00:14:10.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:10.701 02:14:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.702 02:14:58 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:10.702 Cannot find device "nvmf_tgt_br" 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.702 Cannot find device "nvmf_tgt_br2" 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:10.702 Cannot find device "nvmf_tgt_br" 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:10.702 Cannot find device "nvmf_tgt_br2" 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:14:10.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.702 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:10.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:10.961 00:14:10.961 --- 10.0.0.2 ping statistics --- 00:14:10.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.961 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:10.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:10.961 00:14:10.961 --- 10.0.0.3 ping statistics --- 00:14:10.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.961 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:10.961 00:14:10.961 --- 10.0.0.1 ping statistics --- 00:14:10.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.961 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=65897 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 65897 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 65897 ']' 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.961 02:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.961 [2024-05-15 02:14:58.949953] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:10.961 [2024-05-15 02:14:58.950046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.219 [2024-05-15 02:14:59.082830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.219 [2024-05-15 02:14:59.169206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.219 [2024-05-15 02:14:59.169293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.219 [2024-05-15 02:14:59.169315] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.219 [2024-05-15 02:14:59.169330] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.219 [2024-05-15 02:14:59.169342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.219 [2024-05-15 02:14:59.169713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.219 [2024-05-15 02:14:59.169810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.219 [2024-05-15 02:14:59.169878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.219 [2024-05-15 02:14:59.169896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:11.476 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:11.734 "nvmf_tgt_1" 00:14:11.734 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:11.734 "nvmf_tgt_2" 00:14:11.992 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.992 02:14:59 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:11.992 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:11.992 02:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:12.250 true 00:14:12.250 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:12.250 true 00:14:12.250 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:12.250 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.507 rmmod nvme_tcp 00:14:12.507 rmmod nvme_fabrics 00:14:12.507 rmmod nvme_keyring 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 65897 ']' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 65897 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 65897 ']' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 65897 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65897 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:12.507 killing process with pid 65897 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65897' 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 65897 00:14:12.507 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 65897 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:12.765 00:14:12.765 real 0m2.318s 00:14:12.765 user 0m7.091s 00:14:12.765 sys 0m0.614s 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.765 02:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:12.765 ************************************ 00:14:12.765 END TEST nvmf_multitarget 00:14:12.765 ************************************ 00:14:12.765 02:15:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:12.765 02:15:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:12.765 02:15:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.765 02:15:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.024 ************************************ 00:14:13.024 START TEST nvmf_rpc 00:14:13.024 ************************************ 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:13.024 * Looking for test storage... 
00:14:13.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:13.024 Cannot find device "nvmf_tgt_br" 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.024 Cannot find device "nvmf_tgt_br2" 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:13.024 Cannot find device "nvmf_tgt_br" 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:13.024 Cannot find device "nvmf_tgt_br2" 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.024 02:15:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.024 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.024 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.024 02:15:01 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:13.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:14:13.283 00:14:13.283 --- 10.0.0.2 ping statistics --- 00:14:13.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.283 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:13.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:13.283 00:14:13.283 --- 10.0.0.3 ping statistics --- 00:14:13.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.283 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:13.283 00:14:13.283 --- 10.0.0.1 ping statistics --- 00:14:13.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.283 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=66103 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 66103 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 66103 ']' 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.283 02:15:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 [2024-05-15 02:15:01.256172] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:13.283 [2024-05-15 02:15:01.256267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.547 [2024-05-15 02:15:01.388631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.547 [2024-05-15 02:15:01.454226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.547 [2024-05-15 02:15:01.454343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:13.547 [2024-05-15 02:15:01.454357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.547 [2024-05-15 02:15:01.454365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.547 [2024-05-15 02:15:01.454372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.547 [2024-05-15 02:15:01.454537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.547 [2024-05-15 02:15:01.454644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.547 [2024-05-15 02:15:01.455162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.547 [2024-05-15 02:15:01.455173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.508 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:14.508 "poll_groups": [ 00:14:14.508 { 00:14:14.508 "admin_qpairs": 0, 00:14:14.508 "completed_nvme_io": 0, 00:14:14.508 "current_admin_qpairs": 0, 00:14:14.508 "current_io_qpairs": 0, 00:14:14.508 "io_qpairs": 0, 00:14:14.508 "name": "nvmf_tgt_poll_group_000", 00:14:14.508 "pending_bdev_io": 0, 00:14:14.508 "transports": [] 00:14:14.508 }, 00:14:14.508 { 00:14:14.508 "admin_qpairs": 0, 00:14:14.508 "completed_nvme_io": 0, 00:14:14.508 "current_admin_qpairs": 0, 00:14:14.508 "current_io_qpairs": 0, 00:14:14.508 "io_qpairs": 0, 00:14:14.508 "name": "nvmf_tgt_poll_group_001", 00:14:14.508 "pending_bdev_io": 0, 00:14:14.508 "transports": [] 00:14:14.508 }, 00:14:14.508 { 00:14:14.508 "admin_qpairs": 0, 00:14:14.508 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_002", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [] 00:14:14.509 }, 00:14:14.509 { 00:14:14.509 "admin_qpairs": 0, 00:14:14.509 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_003", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [] 00:14:14.509 } 00:14:14.509 ], 00:14:14.509 "tick_rate": 2200000000 00:14:14.509 }' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.509 [2024-05-15 02:15:02.421367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:14.509 "poll_groups": [ 00:14:14.509 { 00:14:14.509 "admin_qpairs": 0, 00:14:14.509 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_000", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [ 00:14:14.509 { 00:14:14.509 "trtype": "TCP" 00:14:14.509 } 00:14:14.509 ] 00:14:14.509 }, 00:14:14.509 { 00:14:14.509 "admin_qpairs": 0, 00:14:14.509 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_001", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [ 00:14:14.509 { 00:14:14.509 "trtype": "TCP" 00:14:14.509 } 00:14:14.509 ] 00:14:14.509 }, 00:14:14.509 { 00:14:14.509 "admin_qpairs": 0, 00:14:14.509 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_002", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [ 00:14:14.509 { 00:14:14.509 "trtype": "TCP" 00:14:14.509 } 00:14:14.509 ] 00:14:14.509 }, 00:14:14.509 { 00:14:14.509 "admin_qpairs": 0, 00:14:14.509 "completed_nvme_io": 0, 00:14:14.509 "current_admin_qpairs": 0, 00:14:14.509 "current_io_qpairs": 0, 00:14:14.509 "io_qpairs": 0, 00:14:14.509 "name": "nvmf_tgt_poll_group_003", 00:14:14.509 "pending_bdev_io": 0, 00:14:14.509 "transports": [ 00:14:14.509 { 00:14:14.509 "trtype": "TCP" 00:14:14.509 } 00:14:14.509 ] 00:14:14.509 } 00:14:14.509 ], 00:14:14.509 "tick_rate": 2200000000 00:14:14.509 }' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.509 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
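The jcount/jsum helpers traced here simply pipe nvmf_get_stats output through jq and awk. A small sketch of the same aggregation, assuming rpc.py is reachable at the default socket path:

    # Count poll groups and sum a per-group counter, as jcount/jsum do above.
    stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l                       # here 4, one per reactor core
    echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'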
00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.766 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 Malloc1 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 [2024-05-15 02:15:02.636668] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:14.767 [2024-05-15 02:15:02.636970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.2 -s 4420 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 
--hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.2 -s 4420 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.2 -s 4420 00:14:14.767 [2024-05-15 02:15:02.659225] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d' 00:14:14.767 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:14.767 could not add new controller: failed to write to nvme-fabrics device 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.767 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.024 02:15:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.024 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:15.024 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.024 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:15.024 02:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:16.924 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:16.925 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.183 [2024-05-15 02:15:04.940631] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d' 00:14:17.183 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:17.183 could not add new controller: failed to write to nvme-fabrics device 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.183 02:15:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.183 02:15:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.183 02:15:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:17.183 02:15:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.183 02:15:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:17.183 02:15:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.711 
02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.711 [2024-05-15 02:15:07.231602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:19.711 02:15:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.612 [2024-05-15 02:15:09.522531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.612 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.613 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 
--hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.872 02:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.872 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:21.872 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.872 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:21.872 02:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.773 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.031 
02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 [2024-05-15 02:15:11.821677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.031 02:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.031 02:15:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.031 02:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:24.031 02:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.031 02:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:24.031 02:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 [2024-05-15 02:15:14.108867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:26.566 02:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.491 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 [2024-05-15 02:15:16.412212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.492 02:15:16 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.492 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.749 02:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.749 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:28.749 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.749 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:28.749 02:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:30.648 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.920 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 [2024-05-15 02:15:18.711294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 [2024-05-15 02:15:18.763526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 [2024-05-15 02:15:18.815620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
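This stretch of the trace repeatedly builds up and tears down nqn.2016-06.io.spdk:cnode1 over the TCP listener. A condensed sketch of one pass of that churn loop, using only the RPC invocations visible above (rpc.py path assumed):

    # One create/teardown pass of the loop driven by target/rpc.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1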
00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 [2024-05-15 02:15:18.871538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:30.921 02:15:18 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.921 [2024-05-15 02:15:18.919524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:31.187 "poll_groups": [ 00:14:31.187 { 00:14:31.187 "admin_qpairs": 2, 00:14:31.187 "completed_nvme_io": 66, 00:14:31.187 "current_admin_qpairs": 0, 00:14:31.187 "current_io_qpairs": 0, 00:14:31.187 "io_qpairs": 16, 00:14:31.187 "name": "nvmf_tgt_poll_group_000", 00:14:31.187 "pending_bdev_io": 0, 00:14:31.187 "transports": [ 00:14:31.187 { 00:14:31.187 "trtype": "TCP" 00:14:31.187 } 00:14:31.187 ] 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "admin_qpairs": 3, 00:14:31.187 "completed_nvme_io": 69, 00:14:31.187 "current_admin_qpairs": 0, 00:14:31.187 "current_io_qpairs": 
0, 00:14:31.187 "io_qpairs": 17, 00:14:31.187 "name": "nvmf_tgt_poll_group_001", 00:14:31.187 "pending_bdev_io": 0, 00:14:31.187 "transports": [ 00:14:31.187 { 00:14:31.187 "trtype": "TCP" 00:14:31.187 } 00:14:31.187 ] 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "admin_qpairs": 1, 00:14:31.187 "completed_nvme_io": 119, 00:14:31.187 "current_admin_qpairs": 0, 00:14:31.187 "current_io_qpairs": 0, 00:14:31.187 "io_qpairs": 19, 00:14:31.187 "name": "nvmf_tgt_poll_group_002", 00:14:31.187 "pending_bdev_io": 0, 00:14:31.187 "transports": [ 00:14:31.187 { 00:14:31.187 "trtype": "TCP" 00:14:31.187 } 00:14:31.187 ] 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "admin_qpairs": 1, 00:14:31.187 "completed_nvme_io": 166, 00:14:31.187 "current_admin_qpairs": 0, 00:14:31.187 "current_io_qpairs": 0, 00:14:31.187 "io_qpairs": 18, 00:14:31.187 "name": "nvmf_tgt_poll_group_003", 00:14:31.187 "pending_bdev_io": 0, 00:14:31.187 "transports": [ 00:14:31.187 { 00:14:31.187 "trtype": "TCP" 00:14:31.187 } 00:14:31.187 ] 00:14:31.187 } 00:14:31.187 ], 00:14:31.187 "tick_rate": 2200000000 00:14:31.187 }' 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:31.187 02:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.187 rmmod nvme_tcp 00:14:31.187 rmmod nvme_fabrics 00:14:31.187 rmmod nvme_keyring 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 66103 ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 66103 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 66103 ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 66103 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # 
uname 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66103 00:14:31.187 killing process with pid 66103 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66103' 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 66103 00:14:31.187 [2024-05-15 02:15:19.183258] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:31.187 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 66103 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:31.446 00:14:31.446 real 0m18.646s 00:14:31.446 user 1m10.239s 00:14:31.446 sys 0m2.588s 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.446 02:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.446 ************************************ 00:14:31.446 END TEST nvmf_rpc 00:14:31.446 ************************************ 00:14:31.706 02:15:19 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:31.706 02:15:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:31.706 02:15:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.706 02:15:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.706 ************************************ 00:14:31.706 START TEST nvmf_invalid 00:14:31.706 ************************************ 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:31.706 * Looking for test storage... 
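The nvmf_rpc test that just finished validates the target's statistics by summing fields across poll groups: the (( 7 > 0 )) and (( 70 > 0 )) checks traced above come from rpc.sh's jsum helper, which pipes nvmf_get_stats JSON through jq and awk. A minimal stand-alone sketch of that pattern follows; the rpc.py path is the one used in this workspace, while the helper body and the final check are illustrative rather than a verbatim copy of rpc.sh.

#!/usr/bin/env bash
# Sum one numeric field across every poll group reported by nvmf_get_stats.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

jsum() {
    local filter=$1
    # jq prints one number per poll group; awk adds them up.
    "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

admin_total=$(jsum '.poll_groups[].admin_qpairs')
io_total=$(jsum '.poll_groups[].io_qpairs')
echo "admin qpairs: $admin_total, io qpairs: $io_total"
(( io_total > 0 ))   # the test treats a zero total as a failure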
00:14:31.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.706 
02:15:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.706 02:15:19 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:31.706 Cannot find device "nvmf_tgt_br" 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.706 Cannot find device "nvmf_tgt_br2" 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:31.706 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:31.707 Cannot find device "nvmf_tgt_br" 00:14:31.707 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:14:31.707 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:31.707 Cannot find device "nvmf_tgt_br2" 00:14:31.707 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:14:31.707 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:31.707 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.965 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:31.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:31.965 00:14:31.965 --- 10.0.0.2 ping statistics --- 00:14:31.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.965 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:31.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:31.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:31.965 00:14:31.965 --- 10.0.0.3 ping statistics --- 00:14:31.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.965 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:31.965 00:14:31.965 --- 10.0.0.1 ping statistics --- 00:14:31.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.965 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.965 02:15:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=66502 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 66502 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 66502 ']' 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.966 02:15:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.224 [2024-05-15 02:15:20.056657] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
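The pings above close out nvmf_veth_init, which the trace shows building a small virtual topology before the target is started: one network namespace for the target, three veth pairs, a bridge joining the host-side ends, and two iptables rules. Condensed into a stand-alone sketch using the exact names and addresses from the trace (cleanup of stale devices and error handling are omitted):

# Target side lives in its own namespace; initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target interfaces.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties together the root-namespace ends of all three pairs.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let traffic cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # reachability check, as in the trace

The same topology is torn down and rebuilt verbatim for the nvmf_abort run later in the log.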
00:14:32.224 [2024-05-15 02:15:20.057047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.224 [2024-05-15 02:15:20.215546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.482 [2024-05-15 02:15:20.279955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.482 [2024-05-15 02:15:20.280007] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.482 [2024-05-15 02:15:20.280019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.482 [2024-05-15 02:15:20.280027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.482 [2024-05-15 02:15:20.280035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.482 [2024-05-15 02:15:20.280149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.482 [2024-05-15 02:15:20.280329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.482 [2024-05-15 02:15:20.281223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.482 [2024-05-15 02:15:20.281270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.049 02:15:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:33.049 02:15:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:14:33.049 02:15:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.049 02:15:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:33.049 02:15:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:33.049 02:15:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.049 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:33.049 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26063 00:14:33.307 [2024-05-15 02:15:21.284314] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:33.307 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26063 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:33.307 request: 00:14:33.307 { 00:14:33.307 "method": "nvmf_create_subsystem", 00:14:33.307 "params": { 00:14:33.307 "nqn": "nqn.2016-06.io.spdk:cnode26063", 00:14:33.307 "tgt_name": "foobar" 00:14:33.307 } 00:14:33.307 } 00:14:33.307 Got JSON-RPC error response 00:14:33.307 GoRPCClient: error on JSON-RPC call' 00:14:33.307 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26063 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:33.307 request: 00:14:33.307 { 
00:14:33.307 "method": "nvmf_create_subsystem", 00:14:33.307 "params": { 00:14:33.307 "nqn": "nqn.2016-06.io.spdk:cnode26063", 00:14:33.307 "tgt_name": "foobar" 00:14:33.307 } 00:14:33.307 } 00:14:33.307 Got JSON-RPC error response 00:14:33.307 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:33.307 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:33.307 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11334 00:14:33.566 [2024-05-15 02:15:21.520562] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11334: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:33.566 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11334 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:33.566 request: 00:14:33.566 { 00:14:33.566 "method": "nvmf_create_subsystem", 00:14:33.566 "params": { 00:14:33.566 "nqn": "nqn.2016-06.io.spdk:cnode11334", 00:14:33.566 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:33.566 } 00:14:33.566 } 00:14:33.566 Got JSON-RPC error response 00:14:33.566 GoRPCClient: error on JSON-RPC call' 00:14:33.566 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11334 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:33.566 request: 00:14:33.566 { 00:14:33.566 "method": "nvmf_create_subsystem", 00:14:33.566 "params": { 00:14:33.566 "nqn": "nqn.2016-06.io.spdk:cnode11334", 00:14:33.566 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:33.566 } 00:14:33.566 } 00:14:33.566 Got JSON-RPC error response 00:14:33.566 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:33.566 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:33.566 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7652 00:14:33.824 [2024-05-15 02:15:21.788764] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7652: invalid model number 'SPDK_Controller' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:33.824 request: 00:14:33.824 { 00:14:33.824 "method": "nvmf_create_subsystem", 00:14:33.824 "params": { 00:14:33.824 "nqn": "nqn.2016-06.io.spdk:cnode7652", 00:14:33.824 "model_number": "SPDK_Controller\u001f" 00:14:33.824 } 00:14:33.824 } 00:14:33.824 Got JSON-RPC error response 00:14:33.824 GoRPCClient: error on JSON-RPC call' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/15 02:15:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode7652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:33.824 request: 00:14:33.824 { 00:14:33.824 "method": "nvmf_create_subsystem", 00:14:33.824 "params": { 00:14:33.824 "nqn": "nqn.2016-06.io.spdk:cnode7652", 00:14:33.824 "model_number": "SPDK_Controller\u001f" 00:14:33.824 } 00:14:33.824 } 00:14:33.824 Got JSON-RPC error response 00:14:33.824 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:33.824 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.825 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:34.083 02:15:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:34.083 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:34.083 02:15:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.084 02:15:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:14:34.084 02:15:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' /dev/null' 00:14:37.505 02:15:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.505 02:15:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.505 00:14:37.505 real 0m5.967s 00:14:37.505 user 0m23.797s 00:14:37.505 sys 0m1.292s 00:14:37.506 02:15:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:37.506 02:15:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:37.506 ************************************ 00:14:37.506 END TEST nvmf_invalid 00:14:37.506 ************************************ 00:14:37.506 02:15:25 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:37.506 02:15:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:37.506 02:15:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:37.506 02:15:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.506 ************************************ 00:14:37.506 START TEST nvmf_abort 00:14:37.506 ************************************ 00:14:37.506 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:37.765 * Looking for test storage... 00:14:37.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
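Each time nvmf/common.sh is sourced (here for the abort test, earlier for nvmf_invalid) it generates a fresh host identity with nvme gen-hostnqn and stores it in NVME_HOST for tests that drive the kernel initiator. The abort test below uses SPDK's own example initiator instead, so these variables go unused in this run; the sketch below only illustrates how they are meant to be consumed. The hostid derivation and the connect command are assumptions based on the values shown in the trace, and the connect itself is not part of this log.

# Derive a host identity the way the sourced common.sh does.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID portion, as seen in the trace
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Kernel-initiator connect using that identity (not executed in this log).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"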
00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.765 Cannot find device "nvmf_tgt_br" 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.765 Cannot find device "nvmf_tgt_br2" 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.765 02:15:25 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.765 Cannot find device "nvmf_tgt_br" 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.765 Cannot find device "nvmf_tgt_br2" 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.765 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:38.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:14:38.025 00:14:38.025 --- 10.0.0.2 ping statistics --- 00:14:38.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.025 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:38.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:38.025 00:14:38.025 --- 10.0.0.3 ping statistics --- 00:14:38.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.025 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:38.025 00:14:38.025 --- 10.0.0.1 ping statistics --- 00:14:38.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.025 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=66976 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 66976 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 66976 ']' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
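nvmfappstart, traced just above, backgrounds nvmf_tgt inside the test namespace and then blocks until the RPC socket answers. A simplified sketch of that launch-and-wait step, using the binary path, flags, and socket path shown in the trace; the polling loop is a stand-in for autotest_common.sh's waitforlisten, not a copy of it.

# Launch nvmf_tgt inside the test namespace, as the trace does for the abort run.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    # Ready once the app has created its RPC socket and answers a trivial RPC.
    if [ -S /var/tmp/spdk.sock ] && "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
kill -0 "$nvmfpid"   # fail fast if the target died during startup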
00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.025 02:15:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:38.025 [2024-05-15 02:15:26.019987] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:38.025 [2024-05-15 02:15:26.020094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.285 [2024-05-15 02:15:26.159771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.285 [2024-05-15 02:15:26.232653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.285 [2024-05-15 02:15:26.232944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.285 [2024-05-15 02:15:26.233213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.285 [2024-05-15 02:15:26.233412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.285 [2024-05-15 02:15:26.233556] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.285 [2024-05-15 02:15:26.233695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.285 [2024-05-15 02:15:26.233852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.285 [2024-05-15 02:15:26.233861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.219 [2024-05-15 02:15:27.085661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.219 Malloc0 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:39.219 02:15:27 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.219 Delay0 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:39.219 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.220 [2024-05-15 02:15:27.153592] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:39.220 [2024-05-15 02:15:27.153935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.220 02:15:27 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:39.478 [2024-05-15 02:15:27.333901] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:41.382 Initializing NVMe Controllers 00:14:41.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:41.382 controller IO queue size 128 less than required 00:14:41.382 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:41.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:41.382 Initialization complete. Launching workers. 
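The abort workload whose startup is logged here talks to a target that was provisioned entirely over JSON-RPC in the rpc_cmd traces above. Spelled out as direct scripts/rpc.py calls, the target setup and the initiator command come down to the sketch below; rpc_cmd in the trace wraps the same RPC names, and the default /var/tmp/spdk.sock RPC socket is assumed:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport with the options used in the trace (-o -u 8192 -a 256).
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

  # A malloc bdev (64 4096 arguments copied from the trace) wrapped in a delay bdev,
  # presumably so queued I/O lingers in the target long enough to be aborted.
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

  # Subsystem cnode0 with Delay0 as its namespace, listening on 10.0.0.2:4420,
  # plus a discovery listener on the same address.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: the abort example on core 0 (-c 0x1), queue depth 128, for 1 second.
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128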
00:14:41.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31422 00:14:41.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31483, failed to submit 62 00:14:41.382 success 31426, unsuccess 57, failed 0 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.382 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.640 rmmod nvme_tcp 00:14:41.640 rmmod nvme_fabrics 00:14:41.640 rmmod nvme_keyring 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 66976 ']' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 66976 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 66976 ']' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 66976 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66976 00:14:41.640 killing process with pid 66976 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66976' 00:14:41.640 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 66976 00:14:41.641 [2024-05-15 02:15:29.498227] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:41.641 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 66976 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.899 02:15:29 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:41.899 ************************************ 00:14:41.899 END TEST nvmf_abort 00:14:41.899 ************************************ 00:14:41.899 00:14:41.899 real 0m4.244s 00:14:41.899 user 0m12.351s 00:14:41.899 sys 0m0.940s 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.899 02:15:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.899 02:15:29 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:41.899 02:15:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.899 02:15:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.899 02:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.899 ************************************ 00:14:41.899 START TEST nvmf_ns_hotplug_stress 00:14:41.899 ************************************ 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:41.899 * Looking for test storage... 00:14:41.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.899 
02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.899 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:41.900 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:42.158 Cannot find device "nvmf_tgt_br" 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.158 Cannot find device "nvmf_tgt_br2" 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:42.158 Cannot find device "nvmf_tgt_br" 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:42.158 Cannot find device "nvmf_tgt_br2" 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:42.158 02:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:42.158 02:15:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:42.158 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:42.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:14:42.417 00:14:42.417 --- 10.0.0.2 ping statistics --- 00:14:42.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.417 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:42.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:42.417 00:14:42.417 --- 10.0.0.3 ping statistics --- 00:14:42.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.417 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:42.417 00:14:42.417 --- 10.0.0.1 ping statistics --- 00:14:42.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.417 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=67217 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 67217 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 67217 ']' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.417 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.417 [2024-05-15 02:15:30.324361] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:42.417 [2024-05-15 02:15:30.324766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.675 [2024-05-15 02:15:30.464996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.675 [2024-05-15 02:15:30.526800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:42.675 [2024-05-15 02:15:30.527055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.675 [2024-05-15 02:15:30.527191] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.675 [2024-05-15 02:15:30.527320] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.675 [2024-05-15 02:15:30.527356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.675 [2024-05-15 02:15:30.527540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.675 [2024-05-15 02:15:30.527650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.675 [2024-05-15 02:15:30.527657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:42.675 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:42.933 [2024-05-15 02:15:30.913629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.933 02:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.249 02:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.522 [2024-05-15 02:15:31.391545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:43.522 [2024-05-15 02:15:31.392381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.522 02:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.781 02:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:44.039 Malloc0 00:14:44.039 02:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:44.298 Delay0 00:14:44.298 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.558 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:44.816 NULL1 00:14:44.816 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:45.075 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67317 00:14:45.075 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:45.075 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:45.075 02:15:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.452 Read completed with error (sct=0, sc=11) 00:14:46.452 02:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.711 02:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:46.711 02:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:46.970 true 00:14:46.970 02:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:46.970 02:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.537 02:15:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.104 02:15:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:48.104 02:15:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:48.104 true 00:14:48.104 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:48.104 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.363 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
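From here the trace settles into the loop that gives this test its name: as long as the perf process started above (pid 67317, 512-byte random reads at queue depth 128 for 30 seconds) is still alive, the script hot-removes namespace 1 from cnode1, re-adds Delay0, bumps null_size, and resizes NULL1. The iterations below continue until perf exits (null_size reaches 1030 just before the "No such process" check further down). Compressed into a standalone sketch, using the pid and names from this run and only the calls visible in the trace, not the exact control flow of ns_hotplug_stress.sh:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PERF_PID=67317      # spdk_nvme_perf launched earlier against 10.0.0.2:4420
  null_size=1000

  while kill -0 "$PERF_PID" 2> /dev/null; do
      # Hot-remove namespace 1 while perf keeps issuing I/O against the subsystem ...
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      # ... then hot-add the Delay0 bdev back as a namespace.
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      # Grow the NULL1 bdev one step per iteration so resize events are exercised too.
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"
  done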
00:14:48.621 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:48.621 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:48.880 true 00:14:48.880 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:48.880 02:15:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.138 02:15:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.704 02:15:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:49.704 02:15:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:49.704 true 00:14:49.704 02:15:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:49.704 02:15:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.638 02:15:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.897 02:15:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:50.897 02:15:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:51.156 true 00:14:51.156 02:15:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:51.156 02:15:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.722 02:15:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.980 02:15:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:51.980 02:15:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:52.239 true 00:14:52.239 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:52.239 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.497 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.755 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:52.755 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:53.014 true 00:14:53.014 02:15:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:53.014 02:15:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.273 02:15:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.840 02:15:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:53.840 02:15:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:53.840 true 00:14:53.840 02:15:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:53.840 02:15:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.098 02:15:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.665 02:15:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:54.665 02:15:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:54.665 true 00:14:54.665 02:15:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:54.665 02:15:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.612 02:15:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.873 02:15:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:55.873 02:15:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:56.439 true 00:14:56.439 02:15:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:56.439 02:15:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.698 02:15:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.957 02:15:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:56.957 02:15:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:57.215 true 00:14:57.215 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:57.215 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.474 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.732 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:57.732 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:57.990 true 00:14:57.990 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:57.990 02:15:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.558 02:15:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.816 02:15:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:58.816 02:15:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:59.074 true 00:14:59.074 02:15:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:59.074 02:15:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.332 02:15:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.601 02:15:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:59.601 02:15:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:59.860 true 00:14:59.860 02:15:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:14:59.860 02:15:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.796 02:15:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.054 02:15:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:01.054 02:15:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:01.311 true 00:15:01.311 02:15:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:01.311 02:15:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.568 02:15:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.826 02:15:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:01.826 02:15:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:02.085 true 00:15:02.085 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:02.085 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.344 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.911 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:02.911 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:02.911 true 00:15:02.911 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:02.911 02:15:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.218 02:15:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.476 02:15:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:03.476 02:15:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:03.735 true 00:15:03.735 02:15:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:03.735 02:15:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.670 02:15:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.928 02:15:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:04.928 02:15:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:05.186 true 00:15:05.186 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:05.186 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.750 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.750 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:05.750 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:06.006 true 00:15:06.006 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:06.006 02:15:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.264 02:15:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.522 02:15:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1021 00:15:06.522 02:15:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:07.089 true 00:15:07.089 02:15:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:07.089 02:15:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.655 02:15:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.913 02:15:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:07.913 02:15:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:08.171 true 00:15:08.171 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:08.171 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.429 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.691 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:08.691 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:08.949 true 00:15:08.949 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:08.949 02:15:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.208 02:15:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.466 02:15:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:09.466 02:15:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:09.724 true 00:15:09.724 02:15:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:09.724 02:15:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.658 02:15:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:10.916 02:15:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:10.916 02:15:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:11.174 true 00:15:11.174 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:11.174 02:15:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.433 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.691 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:11.691 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:11.949 true 00:15:11.949 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:11.949 02:15:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.906 02:16:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.165 02:16:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:13.165 02:16:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:13.434 true 00:15:13.434 02:16:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:13.434 02:16:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.699 02:16:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.957 02:16:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:13.957 02:16:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:14.221 true 00:15:14.221 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:14.221 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.498 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.755 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:14.755 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:15.013 true 00:15:15.013 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:15.013 02:16:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.947 Initializing NVMe Controllers 00:15:15.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.947 Controller IO queue size 128, less than required. 
00:15:15.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.947 Controller IO queue size 128, less than required. 00:15:15.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:15.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:15.947 Initialization complete. Launching workers. 00:15:15.947 ======================================================== 00:15:15.947 Latency(us) 00:15:15.947 Device Information : IOPS MiB/s Average min max 00:15:15.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 349.33 0.17 133497.86 2665.01 1059194.79 00:15:15.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6706.50 3.27 19085.50 3626.84 780649.25 00:15:15.947 ======================================================== 00:15:15.947 Total : 7055.83 3.45 24750.04 2665.01 1059194.79 00:15:15.947 00:15:15.947 02:16:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.947 02:16:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:15.947 02:16:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:16.205 true 00:15:16.205 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67317 00:15:16.205 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67317) - No such process 00:15:16.205 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 67317 00:15:16.205 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.463 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.722 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:16.722 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:16.722 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:16.722 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:16.722 02:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:17.288 null0 00:15:17.288 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.288 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.288 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:17.546 null1 00:15:17.546 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.546 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.546 
02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:17.804 null2 00:15:17.804 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.804 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.804 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:18.062 null3 00:15:18.062 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.063 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.063 02:16:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:18.321 null4 00:15:18.321 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.321 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.321 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:18.580 null5 00:15:18.580 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.580 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.580 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:18.838 null6 00:15:18.838 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.838 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.838 02:16:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:19.097 null7 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.097 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
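The heavily interleaved trace here is eight background workers hot-adding and hot-removing namespaces on nqn.2016-06.io.spdk:cnode1 at the same time. Pieced together from the xtrace markers (ns_hotplug_stress.sh lines 14-18 and 58-66), the pattern looks roughly like the sketch below; this is a paraphrase for readability, with an assumed rpc_py shorthand, not the verbatim SPDK script:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

add_remove() {                          # line 14: one worker per namespace ID
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; ++i )); do    # line 16: ten hotplug cycles per worker
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
    done
}

nthreads=8                              # line 58
pids=()
for (( i = 0; i < nthreads; ++i )); do  # lines 59-60: create one null bdev per worker (bdev_null_create nullN 100 4096)
    $rpc_py bdev_null_create "null$i" 100 4096
done
for (( i = 0; i < nthreads; ++i )); do  # lines 62-64: launch the workers in the background
    add_remove $(( i + 1 )) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                       # line 66: the wait on the eight worker PIDs seen just below

Each worker owns a distinct namespace ID (1 through 8) and its own null bdev (null0 through null7), so the add/remove calls never race on the same NSID; the interleaving in the log is just the eight xtrace streams mixing, not an error.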
00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 68185 68186 68187 68191 68193 68195 68196 68197 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.098 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.356 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.615 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.873 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.874 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:20.132 02:16:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.132 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.132 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.132 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.132 02:16:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.390 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.647 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:20.906 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.165 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.165 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.165 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.165 02:16:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.165 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.424 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.682 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.940 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.941 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.941 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.941 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.941 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:22.197 02:16:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:22.197 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.455 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:22.713 02:16:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:22.713 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.714 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.714 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:22.714 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:22.714 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:22.714 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.972 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:23.230 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.230 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.230 02:16:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.230 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:23.488 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:23.805 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:24.063 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.063 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.063 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.063 02:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:24.063 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:24.323 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.581 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:24.581 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.581 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.581 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.582 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:24.839 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:15:24.839 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.839 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:24.839 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:24.840 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:25.098 02:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:25.098 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.098 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.098 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:25.098 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:25.098 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.356 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.615 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:25.873 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:26.131 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:26.131 02:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:26.131 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.132 rmmod nvme_tcp 00:15:26.132 rmmod nvme_fabrics 00:15:26.132 rmmod nvme_keyring 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 67217 ']' 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 67217 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 67217 ']' 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 67217 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.132 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67217 00:15:26.390 
02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:26.390 killing process with pid 67217 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67217' 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 67217 00:15:26.390 [2024-05-15 02:16:14.155588] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 67217 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:26.390 00:15:26.390 real 0m44.618s 00:15:26.390 user 3m43.130s 00:15:26.390 sys 0m13.453s 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:26.390 02:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.390 ************************************ 00:15:26.390 END TEST nvmf_ns_hotplug_stress 00:15:26.390 ************************************ 00:15:26.648 02:16:14 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.648 02:16:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:26.648 02:16:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:26.648 02:16:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.648 ************************************ 00:15:26.648 START TEST nvmf_connect_stress 00:15:26.648 ************************************ 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.648 * Looking for test storage... 
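The teardown traced above for the hotplug run (nvmftestfini / nvmfcleanup, target pid 67217) boils down to a short sequence. A sketch, assuming _remove_spdk_ns simply deletes the test namespace (its body is suppressed by xtrace_disable in the log):

  # Rough shape of the nvmftestfini teardown traced above.
  sync
  modprobe -v -r nvme-tcp                        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in the log
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # stop the nvmf_tgt reactor process
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush nvmf_init_if                  # drop the initiator-side test address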
00:15:26.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.648 02:16:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:26.649 Cannot find device "nvmf_tgt_br" 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.649 Cannot find device "nvmf_tgt_br2" 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:26.649 Cannot find device "nvmf_tgt_br" 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:26.649 Cannot find device "nvmf_tgt_br2" 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:15:26.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:15:26.649 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:26.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:26.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:26.908 00:15:26.908 --- 10.0.0.2 ping statistics --- 00:15:26.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.908 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:26.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:26.908 00:15:26.908 --- 10.0.0.3 ping statistics --- 00:15:26.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.908 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:26.908 00:15:26.908 --- 10.0.0.1 ping statistics --- 00:15:26.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.908 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=69472 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 69472 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 69472 ']' 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:26.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
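The nvmf_veth_init sequence traced above (nvmf/common.sh@141-207) builds a small bridged topology: one initiator-side veth on the host and two target-side veths inside the nvmf_tgt_ns_spdk namespace. Condensed from the commands visible in the log (the initial "Cannot find device" deletions are only best-effort cleanup of a previous run):

  # Condensed nvmf_veth_init: initiator veth on the host, two target veths in the
  # namespace, everything joined through the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator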
00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:26.908 02:16:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.908 [2024-05-15 02:16:14.913731] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:26.908 [2024-05-15 02:16:14.913836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.167 [2024-05-15 02:16:15.055227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.167 [2024-05-15 02:16:15.122053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.167 [2024-05-15 02:16:15.122108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.167 [2024-05-15 02:16:15.122122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.167 [2024-05-15 02:16:15.122133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.167 [2024-05-15 02:16:15.122141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.167 [2024-05-15 02:16:15.122854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.167 [2024-05-15 02:16:15.123042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.167 [2024-05-15 02:16:15.123046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.103 [2024-05-15 02:16:15.925726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.103 
02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.103 [2024-05-15 02:16:15.947199] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:28.103 [2024-05-15 02:16:15.947579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.103 NULL1 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.103 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=69518 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.104 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.363 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.363 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:28.363 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.363 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.363 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.929 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.929 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:28.929 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.929 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:28.929 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.187 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.187 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:29.187 02:16:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.187 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.187 02:16:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.446 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.446 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:29.446 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.446 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.446 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.704 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.704 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:29.704 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.704 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.704 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.963 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.963 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:29.963 02:16:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.963 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.963 02:16:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.530 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.530 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:30.530 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.530 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.530 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.788 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.788 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:30.788 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.788 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.788 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.047 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.047 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:31.047 02:16:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.047 02:16:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.047 02:16:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.306 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.306 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:31.306 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.306 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.306 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.565 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.565 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:31.565 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.565 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.565 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.132 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.132 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:32.132 02:16:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.132 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.132 02:16:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.390 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.390 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:32.390 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.390 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.390 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.648 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.648 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:32.648 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.648 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.648 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.906 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.906 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:32.906 02:16:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.906 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.906 02:16:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.477 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:33.477 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.477 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.477 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 02:16:21 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.737 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:33.737 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.737 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.737 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.995 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.995 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:33.995 02:16:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.995 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.995 02:16:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.254 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.254 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:34.254 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.254 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.254 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.512 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.512 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:34.512 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.512 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.512 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.079 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.079 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:35.079 02:16:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.079 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.079 02:16:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.337 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.337 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:35.337 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.337 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.337 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.602 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.602 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:35.602 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.602 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.602 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.862 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:35.862 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:35.862 02:16:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.862 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.862 02:16:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.121 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:36.121 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.121 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.121 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.687 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.687 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:36.687 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.687 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.687 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.945 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.945 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:36.945 02:16:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.945 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.945 02:16:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.203 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:37.203 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.203 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.203 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.461 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.461 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:37.461 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.461 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.461 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.719 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:37.719 02:16:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.719 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.719 02:16:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.284 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.284 02:16:26 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 69518 00:15:38.284 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.284 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.284 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.284 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 69518 00:15:38.543 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69518) - No such process 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 69518 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.543 rmmod nvme_tcp 00:15:38.543 rmmod nvme_fabrics 00:15:38.543 rmmod nvme_keyring 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 69472 ']' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 69472 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 69472 ']' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 69472 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69472 00:15:38.543 killing process with pid 69472 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69472' 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 69472 00:15:38.543 [2024-05-15 02:16:26.416222] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:15:38.543 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 69472 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:38.801 00:15:38.801 real 0m12.217s 00:15:38.801 user 0m40.721s 00:15:38.801 sys 0m3.301s 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:38.801 02:16:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.801 ************************************ 00:15:38.801 END TEST nvmf_connect_stress 00:15:38.801 ************************************ 00:15:38.801 02:16:26 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:38.801 02:16:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:38.801 02:16:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:38.801 02:16:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.801 ************************************ 00:15:38.801 START TEST nvmf_fused_ordering 00:15:38.801 ************************************ 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:38.801 * Looking for test storage... 
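Taken together, the connect_stress run that just finished (pids 69472/69518) reduces to the flow below. The contents of rpc.txt are not visible in the trace, so replaying it with a bare rpc_cmd while the stress client is alive is an assumption about what connect_stress.sh@28 and @35 do; the individual commands are otherwise copied from the log:

  # Rough shape of the nvmf_connect_stress flow traced above.
  spdk=/home/vagrant/spdk_repo/spdk
  rpcs=$spdk/test/nvmf/target/rpc.txt

  # Target app runs inside the test namespace (nvmf/common.sh@480 in the trace).
  ip netns exec nvmf_tgt_ns_spdk "$spdk"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"       # autotest_common.sh helper, as in the trace

  # Target side: TCP transport, subsystem capped at 10 namespaces, listener, null bdev.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # Initiator side: hammer the listener with connect/disconnect cycles for 10 seconds.
  "$spdk"/test/nvme/connect_stress/connect_stress \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!

  # While the stress client is alive, keep replaying the queued admin RPCs at the
  # target (assumption: rpc.txt is the batch assembled by the seq/cat loop above).
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"
  done
  wait "$PERF_PID"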
00:15:38.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.801 02:16:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.802 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.058 Cannot find device "nvmf_tgt_br" 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.058 Cannot find device "nvmf_tgt_br2" 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.058 Cannot find device "nvmf_tgt_br" 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.058 Cannot find device "nvmf_tgt_br2" 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:15:39.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.058 02:16:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:39.058 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.315 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:39.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:39.316 00:15:39.316 --- 10.0.0.2 ping statistics --- 00:15:39.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.316 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:39.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:15:39.316 00:15:39.316 --- 10.0.0.3 ping statistics --- 00:15:39.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.316 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:39.316 00:15:39.316 --- 10.0.0.1 ping statistics --- 00:15:39.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.316 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=69780 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 69780 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 69780 ']' 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:39.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
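
The nvmf_veth_init trace above amounts to a small recipe for the virtual test network: one veth pair for the initiator, two veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, all host-side ends enslaved to the nvmf_br bridge, an iptables rule opening TCP port 4420, and ping checks in both directions. The following is a minimal standalone sketch of the same topology, assuming root privileges and stock iproute2/iptables; the interface names, namespace name and addresses are copied from the log (the "Cannot find device" messages above are just the teardown of a previous run and are omitted here).

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                    # bridge the host-side ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator

With that in place, nvmf_tgt can be launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as the log does next) and reached from the host side at 10.0.0.2:4420.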
00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:39.316 02:16:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.316 [2024-05-15 02:16:27.226572] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:39.316 [2024-05-15 02:16:27.226833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.573 [2024-05-15 02:16:27.367629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.573 [2024-05-15 02:16:27.450379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.573 [2024-05-15 02:16:27.450440] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.573 [2024-05-15 02:16:27.450464] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.573 [2024-05-15 02:16:27.450474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.573 [2024-05-15 02:16:27.450481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.573 [2024-05-15 02:16:27.450510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 [2024-05-15 02:16:28.349368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 [2024-05-15 
02:16:28.365270] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:40.505 [2024-05-15 02:16:28.365643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 NULL1 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:40.505 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.506 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.506 02:16:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.506 02:16:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:40.506 [2024-05-15 02:16:28.416618] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
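
Once nvmf_tgt is listening on the RPC socket, the fused_ordering.sh steps traced above provision the target entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TCP listener on 10.0.0.2:4420, and a null bdev attached as a namespace. As a hedged sketch, the same sequence issued directly through scripts/rpc.py (rpc_cmd in the log is assumed to be a thin wrapper around it, talking to the default /var/tmp/spdk.sock) would look like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # flags copied verbatim from the log above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, serial number, max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                          # ~1 GB null bdev, 512-byte blocks (reported below as "size: 1GB")
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as namespace ID 1

The fused_ordering binary is then pointed at that listener with the transport-ID string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'; the fused_ordering(N) lines that follow are the tool's own progress counters.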
00:15:40.506 [2024-05-15 02:16:28.416677] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69824 ] 00:15:41.071 Attached to nqn.2016-06.io.spdk:cnode1 00:15:41.071 Namespace ID: 1 size: 1GB 00:15:41.071 fused_ordering(0) 00:15:41.071 fused_ordering(1) 00:15:41.071 fused_ordering(2) 00:15:41.071 fused_ordering(3) 00:15:41.071 fused_ordering(4) 00:15:41.071 fused_ordering(5) 00:15:41.071 fused_ordering(6) 00:15:41.071 fused_ordering(7) 00:15:41.071 fused_ordering(8) 00:15:41.071 fused_ordering(9) 00:15:41.071 fused_ordering(10) 00:15:41.071 fused_ordering(11) 00:15:41.071 fused_ordering(12) 00:15:41.071 fused_ordering(13) 00:15:41.071 fused_ordering(14) 00:15:41.071 fused_ordering(15) 00:15:41.071 fused_ordering(16) 00:15:41.071 fused_ordering(17) 00:15:41.071 fused_ordering(18) 00:15:41.071 fused_ordering(19) 00:15:41.071 fused_ordering(20) 00:15:41.071 fused_ordering(21) 00:15:41.071 fused_ordering(22) 00:15:41.071 fused_ordering(23) 00:15:41.071 fused_ordering(24) 00:15:41.071 fused_ordering(25) 00:15:41.071 fused_ordering(26) 00:15:41.071 fused_ordering(27) 00:15:41.071 fused_ordering(28) 00:15:41.071 fused_ordering(29) 00:15:41.071 fused_ordering(30) 00:15:41.071 fused_ordering(31) 00:15:41.071 fused_ordering(32) 00:15:41.071 fused_ordering(33) 00:15:41.071 fused_ordering(34) 00:15:41.071 fused_ordering(35) 00:15:41.071 fused_ordering(36) 00:15:41.071 fused_ordering(37) 00:15:41.071 fused_ordering(38) 00:15:41.071 fused_ordering(39) 00:15:41.071 fused_ordering(40) 00:15:41.071 fused_ordering(41) 00:15:41.071 fused_ordering(42) 00:15:41.071 fused_ordering(43) 00:15:41.071 fused_ordering(44) 00:15:41.071 fused_ordering(45) 00:15:41.071 fused_ordering(46) 00:15:41.071 fused_ordering(47) 00:15:41.071 fused_ordering(48) 00:15:41.071 fused_ordering(49) 00:15:41.071 fused_ordering(50) 00:15:41.071 fused_ordering(51) 00:15:41.071 fused_ordering(52) 00:15:41.071 fused_ordering(53) 00:15:41.071 fused_ordering(54) 00:15:41.071 fused_ordering(55) 00:15:41.071 fused_ordering(56) 00:15:41.071 fused_ordering(57) 00:15:41.071 fused_ordering(58) 00:15:41.071 fused_ordering(59) 00:15:41.071 fused_ordering(60) 00:15:41.071 fused_ordering(61) 00:15:41.071 fused_ordering(62) 00:15:41.071 fused_ordering(63) 00:15:41.071 fused_ordering(64) 00:15:41.071 fused_ordering(65) 00:15:41.071 fused_ordering(66) 00:15:41.071 fused_ordering(67) 00:15:41.071 fused_ordering(68) 00:15:41.071 fused_ordering(69) 00:15:41.071 fused_ordering(70) 00:15:41.071 fused_ordering(71) 00:15:41.071 fused_ordering(72) 00:15:41.071 fused_ordering(73) 00:15:41.071 fused_ordering(74) 00:15:41.071 fused_ordering(75) 00:15:41.071 fused_ordering(76) 00:15:41.071 fused_ordering(77) 00:15:41.071 fused_ordering(78) 00:15:41.071 fused_ordering(79) 00:15:41.071 fused_ordering(80) 00:15:41.071 fused_ordering(81) 00:15:41.071 fused_ordering(82) 00:15:41.071 fused_ordering(83) 00:15:41.071 fused_ordering(84) 00:15:41.071 fused_ordering(85) 00:15:41.071 fused_ordering(86) 00:15:41.071 fused_ordering(87) 00:15:41.071 fused_ordering(88) 00:15:41.071 fused_ordering(89) 00:15:41.071 fused_ordering(90) 00:15:41.071 fused_ordering(91) 00:15:41.071 fused_ordering(92) 00:15:41.071 fused_ordering(93) 00:15:41.071 fused_ordering(94) 00:15:41.071 fused_ordering(95) 00:15:41.071 fused_ordering(96) 00:15:41.071 fused_ordering(97) 00:15:41.071 fused_ordering(98) 
00:15:41.071 fused_ordering(99) 00:15:41.071 fused_ordering(100) 00:15:41.071 fused_ordering(101) 00:15:41.071 fused_ordering(102) 00:15:41.071 fused_ordering(103) 00:15:41.071 fused_ordering(104) 00:15:41.071 fused_ordering(105) 00:15:41.071 fused_ordering(106) 00:15:41.071 fused_ordering(107) 00:15:41.071 fused_ordering(108) 00:15:41.071 fused_ordering(109) 00:15:41.071 fused_ordering(110) 00:15:41.071 fused_ordering(111) 00:15:41.071 fused_ordering(112) 00:15:41.071 fused_ordering(113) 00:15:41.071 fused_ordering(114) 00:15:41.071 fused_ordering(115) 00:15:41.071 fused_ordering(116) 00:15:41.071 fused_ordering(117) 00:15:41.071 fused_ordering(118) 00:15:41.071 fused_ordering(119) 00:15:41.071 fused_ordering(120) 00:15:41.071 fused_ordering(121) 00:15:41.071 fused_ordering(122) 00:15:41.071 fused_ordering(123) 00:15:41.071 fused_ordering(124) 00:15:41.071 fused_ordering(125) 00:15:41.071 fused_ordering(126) 00:15:41.071 fused_ordering(127) 00:15:41.071 fused_ordering(128) 00:15:41.071 fused_ordering(129) 00:15:41.071 fused_ordering(130) 00:15:41.071 fused_ordering(131) 00:15:41.071 fused_ordering(132) 00:15:41.071 fused_ordering(133) 00:15:41.071 fused_ordering(134) 00:15:41.071 fused_ordering(135) 00:15:41.071 fused_ordering(136) 00:15:41.071 fused_ordering(137) 00:15:41.071 fused_ordering(138) 00:15:41.071 fused_ordering(139) 00:15:41.071 fused_ordering(140) 00:15:41.071 fused_ordering(141) 00:15:41.071 fused_ordering(142) 00:15:41.071 fused_ordering(143) 00:15:41.071 fused_ordering(144) 00:15:41.071 fused_ordering(145) 00:15:41.071 fused_ordering(146) 00:15:41.071 fused_ordering(147) 00:15:41.071 fused_ordering(148) 00:15:41.071 fused_ordering(149) 00:15:41.071 fused_ordering(150) 00:15:41.071 fused_ordering(151) 00:15:41.071 fused_ordering(152) 00:15:41.071 fused_ordering(153) 00:15:41.071 fused_ordering(154) 00:15:41.071 fused_ordering(155) 00:15:41.071 fused_ordering(156) 00:15:41.071 fused_ordering(157) 00:15:41.071 fused_ordering(158) 00:15:41.071 fused_ordering(159) 00:15:41.071 fused_ordering(160) 00:15:41.071 fused_ordering(161) 00:15:41.071 fused_ordering(162) 00:15:41.071 fused_ordering(163) 00:15:41.071 fused_ordering(164) 00:15:41.071 fused_ordering(165) 00:15:41.071 fused_ordering(166) 00:15:41.071 fused_ordering(167) 00:15:41.071 fused_ordering(168) 00:15:41.071 fused_ordering(169) 00:15:41.071 fused_ordering(170) 00:15:41.071 fused_ordering(171) 00:15:41.071 fused_ordering(172) 00:15:41.071 fused_ordering(173) 00:15:41.071 fused_ordering(174) 00:15:41.071 fused_ordering(175) 00:15:41.071 fused_ordering(176) 00:15:41.071 fused_ordering(177) 00:15:41.071 fused_ordering(178) 00:15:41.071 fused_ordering(179) 00:15:41.071 fused_ordering(180) 00:15:41.071 fused_ordering(181) 00:15:41.071 fused_ordering(182) 00:15:41.071 fused_ordering(183) 00:15:41.071 fused_ordering(184) 00:15:41.071 fused_ordering(185) 00:15:41.071 fused_ordering(186) 00:15:41.071 fused_ordering(187) 00:15:41.071 fused_ordering(188) 00:15:41.071 fused_ordering(189) 00:15:41.071 fused_ordering(190) 00:15:41.071 fused_ordering(191) 00:15:41.071 fused_ordering(192) 00:15:41.071 fused_ordering(193) 00:15:41.071 fused_ordering(194) 00:15:41.071 fused_ordering(195) 00:15:41.071 fused_ordering(196) 00:15:41.071 fused_ordering(197) 00:15:41.071 fused_ordering(198) 00:15:41.071 fused_ordering(199) 00:15:41.071 fused_ordering(200) 00:15:41.071 fused_ordering(201) 00:15:41.071 fused_ordering(202) 00:15:41.071 fused_ordering(203) 00:15:41.071 fused_ordering(204) 00:15:41.071 fused_ordering(205) 00:15:41.330 
fused_ordering(206) 00:15:41.330 fused_ordering(207) 00:15:41.330 fused_ordering(208) 00:15:41.330 fused_ordering(209) 00:15:41.330 fused_ordering(210) 00:15:41.330 fused_ordering(211) 00:15:41.330 fused_ordering(212) 00:15:41.330 fused_ordering(213) 00:15:41.330 fused_ordering(214) 00:15:41.330 fused_ordering(215) 00:15:41.330 fused_ordering(216) 00:15:41.330 fused_ordering(217) 00:15:41.330 fused_ordering(218) 00:15:41.330 fused_ordering(219) 00:15:41.330 fused_ordering(220) 00:15:41.330 fused_ordering(221) 00:15:41.330 fused_ordering(222) 00:15:41.330 fused_ordering(223) 00:15:41.330 fused_ordering(224) 00:15:41.330 fused_ordering(225) 00:15:41.330 fused_ordering(226) 00:15:41.330 fused_ordering(227) 00:15:41.330 fused_ordering(228) 00:15:41.330 fused_ordering(229) 00:15:41.330 fused_ordering(230) 00:15:41.330 fused_ordering(231) 00:15:41.330 fused_ordering(232) 00:15:41.330 fused_ordering(233) 00:15:41.330 fused_ordering(234) 00:15:41.330 fused_ordering(235) 00:15:41.330 fused_ordering(236) 00:15:41.330 fused_ordering(237) 00:15:41.330 fused_ordering(238) 00:15:41.330 fused_ordering(239) 00:15:41.330 fused_ordering(240) 00:15:41.330 fused_ordering(241) 00:15:41.330 fused_ordering(242) 00:15:41.330 fused_ordering(243) 00:15:41.330 fused_ordering(244) 00:15:41.330 fused_ordering(245) 00:15:41.330 fused_ordering(246) 00:15:41.330 fused_ordering(247) 00:15:41.330 fused_ordering(248) 00:15:41.330 fused_ordering(249) 00:15:41.330 fused_ordering(250) 00:15:41.330 fused_ordering(251) 00:15:41.330 fused_ordering(252) 00:15:41.330 fused_ordering(253) 00:15:41.330 fused_ordering(254) 00:15:41.330 fused_ordering(255) 00:15:41.330 fused_ordering(256) 00:15:41.330 fused_ordering(257) 00:15:41.330 fused_ordering(258) 00:15:41.330 fused_ordering(259) 00:15:41.330 fused_ordering(260) 00:15:41.330 fused_ordering(261) 00:15:41.330 fused_ordering(262) 00:15:41.330 fused_ordering(263) 00:15:41.330 fused_ordering(264) 00:15:41.330 fused_ordering(265) 00:15:41.330 fused_ordering(266) 00:15:41.330 fused_ordering(267) 00:15:41.330 fused_ordering(268) 00:15:41.330 fused_ordering(269) 00:15:41.330 fused_ordering(270) 00:15:41.330 fused_ordering(271) 00:15:41.330 fused_ordering(272) 00:15:41.330 fused_ordering(273) 00:15:41.330 fused_ordering(274) 00:15:41.330 fused_ordering(275) 00:15:41.330 fused_ordering(276) 00:15:41.330 fused_ordering(277) 00:15:41.330 fused_ordering(278) 00:15:41.330 fused_ordering(279) 00:15:41.330 fused_ordering(280) 00:15:41.330 fused_ordering(281) 00:15:41.330 fused_ordering(282) 00:15:41.330 fused_ordering(283) 00:15:41.330 fused_ordering(284) 00:15:41.330 fused_ordering(285) 00:15:41.330 fused_ordering(286) 00:15:41.330 fused_ordering(287) 00:15:41.330 fused_ordering(288) 00:15:41.330 fused_ordering(289) 00:15:41.330 fused_ordering(290) 00:15:41.330 fused_ordering(291) 00:15:41.330 fused_ordering(292) 00:15:41.330 fused_ordering(293) 00:15:41.330 fused_ordering(294) 00:15:41.330 fused_ordering(295) 00:15:41.330 fused_ordering(296) 00:15:41.330 fused_ordering(297) 00:15:41.330 fused_ordering(298) 00:15:41.330 fused_ordering(299) 00:15:41.330 fused_ordering(300) 00:15:41.330 fused_ordering(301) 00:15:41.330 fused_ordering(302) 00:15:41.330 fused_ordering(303) 00:15:41.330 fused_ordering(304) 00:15:41.330 fused_ordering(305) 00:15:41.330 fused_ordering(306) 00:15:41.330 fused_ordering(307) 00:15:41.330 fused_ordering(308) 00:15:41.330 fused_ordering(309) 00:15:41.330 fused_ordering(310) 00:15:41.330 fused_ordering(311) 00:15:41.330 fused_ordering(312) 00:15:41.330 fused_ordering(313) 
00:15:41.330 fused_ordering(314) 00:15:41.330 fused_ordering(315) 00:15:41.330 fused_ordering(316) 00:15:41.330 fused_ordering(317) 00:15:41.330 fused_ordering(318) 00:15:41.330 fused_ordering(319) 00:15:41.330 fused_ordering(320) 00:15:41.330 fused_ordering(321) 00:15:41.330 fused_ordering(322) 00:15:41.330 fused_ordering(323) 00:15:41.330 fused_ordering(324) 00:15:41.330 fused_ordering(325) 00:15:41.330 fused_ordering(326) 00:15:41.330 fused_ordering(327) 00:15:41.330 fused_ordering(328) 00:15:41.330 fused_ordering(329) 00:15:41.330 fused_ordering(330) 00:15:41.330 fused_ordering(331) 00:15:41.330 fused_ordering(332) 00:15:41.330 fused_ordering(333) 00:15:41.330 fused_ordering(334) 00:15:41.330 fused_ordering(335) 00:15:41.330 fused_ordering(336) 00:15:41.330 fused_ordering(337) 00:15:41.330 fused_ordering(338) 00:15:41.330 fused_ordering(339) 00:15:41.330 fused_ordering(340) 00:15:41.330 fused_ordering(341) 00:15:41.330 fused_ordering(342) 00:15:41.330 fused_ordering(343) 00:15:41.330 fused_ordering(344) 00:15:41.330 fused_ordering(345) 00:15:41.330 fused_ordering(346) 00:15:41.330 fused_ordering(347) 00:15:41.330 fused_ordering(348) 00:15:41.330 fused_ordering(349) 00:15:41.330 fused_ordering(350) 00:15:41.330 fused_ordering(351) 00:15:41.330 fused_ordering(352) 00:15:41.330 fused_ordering(353) 00:15:41.330 fused_ordering(354) 00:15:41.330 fused_ordering(355) 00:15:41.330 fused_ordering(356) 00:15:41.330 fused_ordering(357) 00:15:41.330 fused_ordering(358) 00:15:41.330 fused_ordering(359) 00:15:41.330 fused_ordering(360) 00:15:41.330 fused_ordering(361) 00:15:41.330 fused_ordering(362) 00:15:41.330 fused_ordering(363) 00:15:41.330 fused_ordering(364) 00:15:41.330 fused_ordering(365) 00:15:41.330 fused_ordering(366) 00:15:41.330 fused_ordering(367) 00:15:41.330 fused_ordering(368) 00:15:41.330 fused_ordering(369) 00:15:41.330 fused_ordering(370) 00:15:41.330 fused_ordering(371) 00:15:41.330 fused_ordering(372) 00:15:41.330 fused_ordering(373) 00:15:41.330 fused_ordering(374) 00:15:41.330 fused_ordering(375) 00:15:41.330 fused_ordering(376) 00:15:41.330 fused_ordering(377) 00:15:41.330 fused_ordering(378) 00:15:41.330 fused_ordering(379) 00:15:41.330 fused_ordering(380) 00:15:41.330 fused_ordering(381) 00:15:41.330 fused_ordering(382) 00:15:41.330 fused_ordering(383) 00:15:41.330 fused_ordering(384) 00:15:41.330 fused_ordering(385) 00:15:41.330 fused_ordering(386) 00:15:41.330 fused_ordering(387) 00:15:41.330 fused_ordering(388) 00:15:41.330 fused_ordering(389) 00:15:41.330 fused_ordering(390) 00:15:41.330 fused_ordering(391) 00:15:41.331 fused_ordering(392) 00:15:41.331 fused_ordering(393) 00:15:41.331 fused_ordering(394) 00:15:41.331 fused_ordering(395) 00:15:41.331 fused_ordering(396) 00:15:41.331 fused_ordering(397) 00:15:41.331 fused_ordering(398) 00:15:41.331 fused_ordering(399) 00:15:41.331 fused_ordering(400) 00:15:41.331 fused_ordering(401) 00:15:41.331 fused_ordering(402) 00:15:41.331 fused_ordering(403) 00:15:41.331 fused_ordering(404) 00:15:41.331 fused_ordering(405) 00:15:41.331 fused_ordering(406) 00:15:41.331 fused_ordering(407) 00:15:41.331 fused_ordering(408) 00:15:41.331 fused_ordering(409) 00:15:41.331 fused_ordering(410) 00:15:41.896 fused_ordering(411) 00:15:41.896 fused_ordering(412) 00:15:41.896 fused_ordering(413) 00:15:41.896 fused_ordering(414) 00:15:41.896 fused_ordering(415) 00:15:41.896 fused_ordering(416) 00:15:41.896 fused_ordering(417) 00:15:41.896 fused_ordering(418) 00:15:41.896 fused_ordering(419) 00:15:41.896 fused_ordering(420) 00:15:41.896 
fused_ordering(421) 00:15:41.896 fused_ordering(422) 00:15:41.896 fused_ordering(423) 00:15:41.896 fused_ordering(424) 00:15:41.896 fused_ordering(425) 00:15:41.896 fused_ordering(426) 00:15:41.896 fused_ordering(427) 00:15:41.896 fused_ordering(428) 00:15:41.896 fused_ordering(429) 00:15:41.896 fused_ordering(430) 00:15:41.896 fused_ordering(431) 00:15:41.896 fused_ordering(432) 00:15:41.896 fused_ordering(433) 00:15:41.896 fused_ordering(434) 00:15:41.896 fused_ordering(435) 00:15:41.896 fused_ordering(436) 00:15:41.896 fused_ordering(437) 00:15:41.896 fused_ordering(438) 00:15:41.896 fused_ordering(439) 00:15:41.896 fused_ordering(440) 00:15:41.896 fused_ordering(441) 00:15:41.896 fused_ordering(442) 00:15:41.896 fused_ordering(443) 00:15:41.896 fused_ordering(444) 00:15:41.896 fused_ordering(445) 00:15:41.896 fused_ordering(446) 00:15:41.896 fused_ordering(447) 00:15:41.896 fused_ordering(448) 00:15:41.896 fused_ordering(449) 00:15:41.896 fused_ordering(450) 00:15:41.896 fused_ordering(451) 00:15:41.896 fused_ordering(452) 00:15:41.896 fused_ordering(453) 00:15:41.896 fused_ordering(454) 00:15:41.896 fused_ordering(455) 00:15:41.896 fused_ordering(456) 00:15:41.896 fused_ordering(457) 00:15:41.896 fused_ordering(458) 00:15:41.896 fused_ordering(459) 00:15:41.896 fused_ordering(460) 00:15:41.896 fused_ordering(461) 00:15:41.896 fused_ordering(462) 00:15:41.896 fused_ordering(463) 00:15:41.896 fused_ordering(464) 00:15:41.896 fused_ordering(465) 00:15:41.896 fused_ordering(466) 00:15:41.896 fused_ordering(467) 00:15:41.896 fused_ordering(468) 00:15:41.896 fused_ordering(469) 00:15:41.896 fused_ordering(470) 00:15:41.896 fused_ordering(471) 00:15:41.896 fused_ordering(472) 00:15:41.896 fused_ordering(473) 00:15:41.896 fused_ordering(474) 00:15:41.896 fused_ordering(475) 00:15:41.896 fused_ordering(476) 00:15:41.896 fused_ordering(477) 00:15:41.896 fused_ordering(478) 00:15:41.896 fused_ordering(479) 00:15:41.896 fused_ordering(480) 00:15:41.896 fused_ordering(481) 00:15:41.896 fused_ordering(482) 00:15:41.896 fused_ordering(483) 00:15:41.896 fused_ordering(484) 00:15:41.896 fused_ordering(485) 00:15:41.896 fused_ordering(486) 00:15:41.896 fused_ordering(487) 00:15:41.896 fused_ordering(488) 00:15:41.896 fused_ordering(489) 00:15:41.896 fused_ordering(490) 00:15:41.896 fused_ordering(491) 00:15:41.896 fused_ordering(492) 00:15:41.896 fused_ordering(493) 00:15:41.896 fused_ordering(494) 00:15:41.896 fused_ordering(495) 00:15:41.896 fused_ordering(496) 00:15:41.896 fused_ordering(497) 00:15:41.896 fused_ordering(498) 00:15:41.896 fused_ordering(499) 00:15:41.896 fused_ordering(500) 00:15:41.896 fused_ordering(501) 00:15:41.896 fused_ordering(502) 00:15:41.896 fused_ordering(503) 00:15:41.897 fused_ordering(504) 00:15:41.897 fused_ordering(505) 00:15:41.897 fused_ordering(506) 00:15:41.897 fused_ordering(507) 00:15:41.897 fused_ordering(508) 00:15:41.897 fused_ordering(509) 00:15:41.897 fused_ordering(510) 00:15:41.897 fused_ordering(511) 00:15:41.897 fused_ordering(512) 00:15:41.897 fused_ordering(513) 00:15:41.897 fused_ordering(514) 00:15:41.897 fused_ordering(515) 00:15:41.897 fused_ordering(516) 00:15:41.897 fused_ordering(517) 00:15:41.897 fused_ordering(518) 00:15:41.897 fused_ordering(519) 00:15:41.897 fused_ordering(520) 00:15:41.897 fused_ordering(521) 00:15:41.897 fused_ordering(522) 00:15:41.897 fused_ordering(523) 00:15:41.897 fused_ordering(524) 00:15:41.897 fused_ordering(525) 00:15:41.897 fused_ordering(526) 00:15:41.897 fused_ordering(527) 00:15:41.897 fused_ordering(528) 
00:15:41.897 fused_ordering(529) 00:15:41.897 fused_ordering(530) 00:15:41.897 fused_ordering(531) 00:15:41.897 fused_ordering(532) 00:15:41.897 fused_ordering(533) 00:15:41.897 fused_ordering(534) 00:15:41.897 fused_ordering(535) 00:15:41.897 fused_ordering(536) 00:15:41.897 fused_ordering(537) 00:15:41.897 fused_ordering(538) 00:15:41.897 fused_ordering(539) 00:15:41.897 fused_ordering(540) 00:15:41.897 fused_ordering(541) 00:15:41.897 fused_ordering(542) 00:15:41.897 fused_ordering(543) 00:15:41.897 fused_ordering(544) 00:15:41.897 fused_ordering(545) 00:15:41.897 fused_ordering(546) 00:15:41.897 fused_ordering(547) 00:15:41.897 fused_ordering(548) 00:15:41.897 fused_ordering(549) 00:15:41.897 fused_ordering(550) 00:15:41.897 fused_ordering(551) 00:15:41.897 fused_ordering(552) 00:15:41.897 fused_ordering(553) 00:15:41.897 fused_ordering(554) 00:15:41.897 fused_ordering(555) 00:15:41.897 fused_ordering(556) 00:15:41.897 fused_ordering(557) 00:15:41.897 fused_ordering(558) 00:15:41.897 fused_ordering(559) 00:15:41.897 fused_ordering(560) 00:15:41.897 fused_ordering(561) 00:15:41.897 fused_ordering(562) 00:15:41.897 fused_ordering(563) 00:15:41.897 fused_ordering(564) 00:15:41.897 fused_ordering(565) 00:15:41.897 fused_ordering(566) 00:15:41.897 fused_ordering(567) 00:15:41.897 fused_ordering(568) 00:15:41.897 fused_ordering(569) 00:15:41.897 fused_ordering(570) 00:15:41.897 fused_ordering(571) 00:15:41.897 fused_ordering(572) 00:15:41.897 fused_ordering(573) 00:15:41.897 fused_ordering(574) 00:15:41.897 fused_ordering(575) 00:15:41.897 fused_ordering(576) 00:15:41.897 fused_ordering(577) 00:15:41.897 fused_ordering(578) 00:15:41.897 fused_ordering(579) 00:15:41.897 fused_ordering(580) 00:15:41.897 fused_ordering(581) 00:15:41.897 fused_ordering(582) 00:15:41.897 fused_ordering(583) 00:15:41.897 fused_ordering(584) 00:15:41.897 fused_ordering(585) 00:15:41.897 fused_ordering(586) 00:15:41.897 fused_ordering(587) 00:15:41.897 fused_ordering(588) 00:15:41.897 fused_ordering(589) 00:15:41.897 fused_ordering(590) 00:15:41.897 fused_ordering(591) 00:15:41.897 fused_ordering(592) 00:15:41.897 fused_ordering(593) 00:15:41.897 fused_ordering(594) 00:15:41.897 fused_ordering(595) 00:15:41.897 fused_ordering(596) 00:15:41.897 fused_ordering(597) 00:15:41.897 fused_ordering(598) 00:15:41.897 fused_ordering(599) 00:15:41.897 fused_ordering(600) 00:15:41.897 fused_ordering(601) 00:15:41.897 fused_ordering(602) 00:15:41.897 fused_ordering(603) 00:15:41.897 fused_ordering(604) 00:15:41.897 fused_ordering(605) 00:15:41.897 fused_ordering(606) 00:15:41.897 fused_ordering(607) 00:15:41.897 fused_ordering(608) 00:15:41.897 fused_ordering(609) 00:15:41.897 fused_ordering(610) 00:15:41.897 fused_ordering(611) 00:15:41.897 fused_ordering(612) 00:15:41.897 fused_ordering(613) 00:15:41.897 fused_ordering(614) 00:15:41.897 fused_ordering(615) 00:15:42.463 fused_ordering(616) 00:15:42.463 fused_ordering(617) 00:15:42.463 fused_ordering(618) 00:15:42.463 fused_ordering(619) 00:15:42.463 fused_ordering(620) 00:15:42.464 fused_ordering(621) 00:15:42.464 fused_ordering(622) 00:15:42.464 fused_ordering(623) 00:15:42.464 fused_ordering(624) 00:15:42.464 fused_ordering(625) 00:15:42.464 fused_ordering(626) 00:15:42.464 fused_ordering(627) 00:15:42.464 fused_ordering(628) 00:15:42.464 fused_ordering(629) 00:15:42.464 fused_ordering(630) 00:15:42.464 fused_ordering(631) 00:15:42.464 fused_ordering(632) 00:15:42.464 fused_ordering(633) 00:15:42.464 fused_ordering(634) 00:15:42.464 fused_ordering(635) 00:15:42.464 
fused_ordering(636) 00:15:42.464 fused_ordering(637) 00:15:42.464 fused_ordering(638) 00:15:42.464 fused_ordering(639) 00:15:42.464 fused_ordering(640) 00:15:42.464 fused_ordering(641) 00:15:42.464 fused_ordering(642) 00:15:42.464 fused_ordering(643) 00:15:42.464 fused_ordering(644) 00:15:42.464 fused_ordering(645) 00:15:42.464 fused_ordering(646) 00:15:42.464 fused_ordering(647) 00:15:42.464 fused_ordering(648) 00:15:42.464 fused_ordering(649) 00:15:42.464 fused_ordering(650) 00:15:42.464 fused_ordering(651) 00:15:42.464 fused_ordering(652) 00:15:42.464 fused_ordering(653) 00:15:42.464 fused_ordering(654) 00:15:42.464 fused_ordering(655) 00:15:42.464 fused_ordering(656) 00:15:42.464 fused_ordering(657) 00:15:42.464 fused_ordering(658) 00:15:42.464 fused_ordering(659) 00:15:42.464 fused_ordering(660) 00:15:42.464 fused_ordering(661) 00:15:42.464 fused_ordering(662) 00:15:42.464 fused_ordering(663) 00:15:42.464 fused_ordering(664) 00:15:42.464 fused_ordering(665) 00:15:42.464 fused_ordering(666) 00:15:42.464 fused_ordering(667) 00:15:42.464 fused_ordering(668) 00:15:42.464 fused_ordering(669) 00:15:42.464 fused_ordering(670) 00:15:42.464 fused_ordering(671) 00:15:42.464 fused_ordering(672) 00:15:42.464 fused_ordering(673) 00:15:42.464 fused_ordering(674) 00:15:42.464 fused_ordering(675) 00:15:42.464 fused_ordering(676) 00:15:42.464 fused_ordering(677) 00:15:42.464 fused_ordering(678) 00:15:42.464 fused_ordering(679) 00:15:42.464 fused_ordering(680) 00:15:42.464 fused_ordering(681) 00:15:42.464 fused_ordering(682) 00:15:42.464 fused_ordering(683) 00:15:42.464 fused_ordering(684) 00:15:42.464 fused_ordering(685) 00:15:42.464 fused_ordering(686) 00:15:42.464 fused_ordering(687) 00:15:42.464 fused_ordering(688) 00:15:42.464 fused_ordering(689) 00:15:42.464 fused_ordering(690) 00:15:42.464 fused_ordering(691) 00:15:42.464 fused_ordering(692) 00:15:42.464 fused_ordering(693) 00:15:42.464 fused_ordering(694) 00:15:42.464 fused_ordering(695) 00:15:42.464 fused_ordering(696) 00:15:42.464 fused_ordering(697) 00:15:42.464 fused_ordering(698) 00:15:42.464 fused_ordering(699) 00:15:42.464 fused_ordering(700) 00:15:42.464 fused_ordering(701) 00:15:42.464 fused_ordering(702) 00:15:42.464 fused_ordering(703) 00:15:42.464 fused_ordering(704) 00:15:42.464 fused_ordering(705) 00:15:42.464 fused_ordering(706) 00:15:42.464 fused_ordering(707) 00:15:42.464 fused_ordering(708) 00:15:42.464 fused_ordering(709) 00:15:42.464 fused_ordering(710) 00:15:42.464 fused_ordering(711) 00:15:42.464 fused_ordering(712) 00:15:42.464 fused_ordering(713) 00:15:42.464 fused_ordering(714) 00:15:42.464 fused_ordering(715) 00:15:42.464 fused_ordering(716) 00:15:42.464 fused_ordering(717) 00:15:42.464 fused_ordering(718) 00:15:42.464 fused_ordering(719) 00:15:42.464 fused_ordering(720) 00:15:42.464 fused_ordering(721) 00:15:42.464 fused_ordering(722) 00:15:42.464 fused_ordering(723) 00:15:42.464 fused_ordering(724) 00:15:42.464 fused_ordering(725) 00:15:42.464 fused_ordering(726) 00:15:42.464 fused_ordering(727) 00:15:42.464 fused_ordering(728) 00:15:42.464 fused_ordering(729) 00:15:42.464 fused_ordering(730) 00:15:42.464 fused_ordering(731) 00:15:42.464 fused_ordering(732) 00:15:42.464 fused_ordering(733) 00:15:42.464 fused_ordering(734) 00:15:42.464 fused_ordering(735) 00:15:42.464 fused_ordering(736) 00:15:42.464 fused_ordering(737) 00:15:42.464 fused_ordering(738) 00:15:42.464 fused_ordering(739) 00:15:42.464 fused_ordering(740) 00:15:42.464 fused_ordering(741) 00:15:42.464 fused_ordering(742) 00:15:42.464 fused_ordering(743) 
00:15:42.464 fused_ordering(744) 00:15:42.464 fused_ordering(745) 00:15:42.464 fused_ordering(746) 00:15:42.464 fused_ordering(747) 00:15:42.464 fused_ordering(748) 00:15:42.464 fused_ordering(749) 00:15:42.464 fused_ordering(750) 00:15:42.464 fused_ordering(751) 00:15:42.464 fused_ordering(752) 00:15:42.464 fused_ordering(753) 00:15:42.464 fused_ordering(754) 00:15:42.464 fused_ordering(755) 00:15:42.464 fused_ordering(756) 00:15:42.464 fused_ordering(757) 00:15:42.464 fused_ordering(758) 00:15:42.464 fused_ordering(759) 00:15:42.464 fused_ordering(760) 00:15:42.464 fused_ordering(761) 00:15:42.464 fused_ordering(762) 00:15:42.464 fused_ordering(763) 00:15:42.464 fused_ordering(764) 00:15:42.464 fused_ordering(765) 00:15:42.464 fused_ordering(766) 00:15:42.464 fused_ordering(767) 00:15:42.464 fused_ordering(768) 00:15:42.464 fused_ordering(769) 00:15:42.464 fused_ordering(770) 00:15:42.464 fused_ordering(771) 00:15:42.464 fused_ordering(772) 00:15:42.464 fused_ordering(773) 00:15:42.464 fused_ordering(774) 00:15:42.464 fused_ordering(775) 00:15:42.464 fused_ordering(776) 00:15:42.464 fused_ordering(777) 00:15:42.464 fused_ordering(778) 00:15:42.464 fused_ordering(779) 00:15:42.464 fused_ordering(780) 00:15:42.464 fused_ordering(781) 00:15:42.464 fused_ordering(782) 00:15:42.464 fused_ordering(783) 00:15:42.464 fused_ordering(784) 00:15:42.464 fused_ordering(785) 00:15:42.464 fused_ordering(786) 00:15:42.464 fused_ordering(787) 00:15:42.464 fused_ordering(788) 00:15:42.464 fused_ordering(789) 00:15:42.464 fused_ordering(790) 00:15:42.464 fused_ordering(791) 00:15:42.464 fused_ordering(792) 00:15:42.464 fused_ordering(793) 00:15:42.464 fused_ordering(794) 00:15:42.464 fused_ordering(795) 00:15:42.464 fused_ordering(796) 00:15:42.465 fused_ordering(797) 00:15:42.465 fused_ordering(798) 00:15:42.465 fused_ordering(799) 00:15:42.465 fused_ordering(800) 00:15:42.465 fused_ordering(801) 00:15:42.465 fused_ordering(802) 00:15:42.465 fused_ordering(803) 00:15:42.465 fused_ordering(804) 00:15:42.465 fused_ordering(805) 00:15:42.465 fused_ordering(806) 00:15:42.465 fused_ordering(807) 00:15:42.465 fused_ordering(808) 00:15:42.465 fused_ordering(809) 00:15:42.465 fused_ordering(810) 00:15:42.465 fused_ordering(811) 00:15:42.465 fused_ordering(812) 00:15:42.465 fused_ordering(813) 00:15:42.465 fused_ordering(814) 00:15:42.465 fused_ordering(815) 00:15:42.465 fused_ordering(816) 00:15:42.465 fused_ordering(817) 00:15:42.465 fused_ordering(818) 00:15:42.465 fused_ordering(819) 00:15:42.465 fused_ordering(820) 00:15:43.030 fused_ordering(821) 00:15:43.030 fused_ordering(822) 00:15:43.030 fused_ordering(823) 00:15:43.030 fused_ordering(824) 00:15:43.030 fused_ordering(825) 00:15:43.030 fused_ordering(826) 00:15:43.030 fused_ordering(827) 00:15:43.030 fused_ordering(828) 00:15:43.030 fused_ordering(829) 00:15:43.030 fused_ordering(830) 00:15:43.030 fused_ordering(831) 00:15:43.030 fused_ordering(832) 00:15:43.030 fused_ordering(833) 00:15:43.030 fused_ordering(834) 00:15:43.030 fused_ordering(835) 00:15:43.030 fused_ordering(836) 00:15:43.030 fused_ordering(837) 00:15:43.030 fused_ordering(838) 00:15:43.030 fused_ordering(839) 00:15:43.030 fused_ordering(840) 00:15:43.030 fused_ordering(841) 00:15:43.030 fused_ordering(842) 00:15:43.030 fused_ordering(843) 00:15:43.030 fused_ordering(844) 00:15:43.030 fused_ordering(845) 00:15:43.030 fused_ordering(846) 00:15:43.030 fused_ordering(847) 00:15:43.030 fused_ordering(848) 00:15:43.030 fused_ordering(849) 00:15:43.030 fused_ordering(850) 00:15:43.030 
fused_ordering(851) 00:15:43.030 fused_ordering(852) 00:15:43.030 fused_ordering(853) 00:15:43.030 fused_ordering(854) 00:15:43.030 fused_ordering(855) 00:15:43.030 fused_ordering(856) 00:15:43.030 fused_ordering(857) 00:15:43.030 fused_ordering(858) 00:15:43.030 fused_ordering(859) 00:15:43.030 fused_ordering(860) 00:15:43.030 fused_ordering(861) 00:15:43.030 fused_ordering(862) 00:15:43.030 fused_ordering(863) 00:15:43.030 fused_ordering(864) 00:15:43.030 fused_ordering(865) 00:15:43.030 fused_ordering(866) 00:15:43.030 fused_ordering(867) 00:15:43.030 fused_ordering(868) 00:15:43.030 fused_ordering(869) 00:15:43.030 fused_ordering(870) 00:15:43.030 fused_ordering(871) 00:15:43.030 fused_ordering(872) 00:15:43.030 fused_ordering(873) 00:15:43.030 fused_ordering(874) 00:15:43.030 fused_ordering(875) 00:15:43.030 fused_ordering(876) 00:15:43.030 fused_ordering(877) 00:15:43.030 fused_ordering(878) 00:15:43.030 fused_ordering(879) 00:15:43.030 fused_ordering(880) 00:15:43.030 fused_ordering(881) 00:15:43.030 fused_ordering(882) 00:15:43.030 fused_ordering(883) 00:15:43.030 fused_ordering(884) 00:15:43.030 fused_ordering(885) 00:15:43.030 fused_ordering(886) 00:15:43.030 fused_ordering(887) 00:15:43.030 fused_ordering(888) 00:15:43.030 fused_ordering(889) 00:15:43.030 fused_ordering(890) 00:15:43.030 fused_ordering(891) 00:15:43.030 fused_ordering(892) 00:15:43.030 fused_ordering(893) 00:15:43.030 fused_ordering(894) 00:15:43.030 fused_ordering(895) 00:15:43.030 fused_ordering(896) 00:15:43.030 fused_ordering(897) 00:15:43.030 fused_ordering(898) 00:15:43.030 fused_ordering(899) 00:15:43.030 fused_ordering(900) 00:15:43.030 fused_ordering(901) 00:15:43.030 fused_ordering(902) 00:15:43.030 fused_ordering(903) 00:15:43.030 fused_ordering(904) 00:15:43.030 fused_ordering(905) 00:15:43.030 fused_ordering(906) 00:15:43.030 fused_ordering(907) 00:15:43.030 fused_ordering(908) 00:15:43.030 fused_ordering(909) 00:15:43.030 fused_ordering(910) 00:15:43.030 fused_ordering(911) 00:15:43.030 fused_ordering(912) 00:15:43.030 fused_ordering(913) 00:15:43.030 fused_ordering(914) 00:15:43.030 fused_ordering(915) 00:15:43.030 fused_ordering(916) 00:15:43.030 fused_ordering(917) 00:15:43.030 fused_ordering(918) 00:15:43.030 fused_ordering(919) 00:15:43.030 fused_ordering(920) 00:15:43.030 fused_ordering(921) 00:15:43.030 fused_ordering(922) 00:15:43.030 fused_ordering(923) 00:15:43.030 fused_ordering(924) 00:15:43.030 fused_ordering(925) 00:15:43.030 fused_ordering(926) 00:15:43.030 fused_ordering(927) 00:15:43.030 fused_ordering(928) 00:15:43.030 fused_ordering(929) 00:15:43.030 fused_ordering(930) 00:15:43.030 fused_ordering(931) 00:15:43.030 fused_ordering(932) 00:15:43.030 fused_ordering(933) 00:15:43.030 fused_ordering(934) 00:15:43.030 fused_ordering(935) 00:15:43.030 fused_ordering(936) 00:15:43.030 fused_ordering(937) 00:15:43.030 fused_ordering(938) 00:15:43.030 fused_ordering(939) 00:15:43.030 fused_ordering(940) 00:15:43.030 fused_ordering(941) 00:15:43.030 fused_ordering(942) 00:15:43.030 fused_ordering(943) 00:15:43.030 fused_ordering(944) 00:15:43.030 fused_ordering(945) 00:15:43.030 fused_ordering(946) 00:15:43.030 fused_ordering(947) 00:15:43.030 fused_ordering(948) 00:15:43.030 fused_ordering(949) 00:15:43.030 fused_ordering(950) 00:15:43.030 fused_ordering(951) 00:15:43.030 fused_ordering(952) 00:15:43.030 fused_ordering(953) 00:15:43.030 fused_ordering(954) 00:15:43.030 fused_ordering(955) 00:15:43.031 fused_ordering(956) 00:15:43.031 fused_ordering(957) 00:15:43.031 fused_ordering(958) 
00:15:43.031 fused_ordering(959) 00:15:43.031 fused_ordering(960) 00:15:43.031 fused_ordering(961) 00:15:43.031 fused_ordering(962) 00:15:43.031 fused_ordering(963) 00:15:43.031 fused_ordering(964) 00:15:43.031 fused_ordering(965) 00:15:43.031 fused_ordering(966) 00:15:43.031 fused_ordering(967) 00:15:43.031 fused_ordering(968) 00:15:43.031 fused_ordering(969) 00:15:43.031 fused_ordering(970) 00:15:43.031 fused_ordering(971) 00:15:43.031 fused_ordering(972) 00:15:43.031 fused_ordering(973) 00:15:43.031 fused_ordering(974) 00:15:43.031 fused_ordering(975) 00:15:43.031 fused_ordering(976) 00:15:43.031 fused_ordering(977) 00:15:43.031 fused_ordering(978) 00:15:43.031 fused_ordering(979) 00:15:43.031 fused_ordering(980) 00:15:43.031 fused_ordering(981) 00:15:43.031 fused_ordering(982) 00:15:43.031 fused_ordering(983) 00:15:43.031 fused_ordering(984) 00:15:43.031 fused_ordering(985) 00:15:43.031 fused_ordering(986) 00:15:43.031 fused_ordering(987) 00:15:43.031 fused_ordering(988) 00:15:43.031 fused_ordering(989) 00:15:43.031 fused_ordering(990) 00:15:43.031 fused_ordering(991) 00:15:43.031 fused_ordering(992) 00:15:43.031 fused_ordering(993) 00:15:43.031 fused_ordering(994) 00:15:43.031 fused_ordering(995) 00:15:43.031 fused_ordering(996) 00:15:43.031 fused_ordering(997) 00:15:43.031 fused_ordering(998) 00:15:43.031 fused_ordering(999) 00:15:43.031 fused_ordering(1000) 00:15:43.031 fused_ordering(1001) 00:15:43.031 fused_ordering(1002) 00:15:43.031 fused_ordering(1003) 00:15:43.031 fused_ordering(1004) 00:15:43.031 fused_ordering(1005) 00:15:43.031 fused_ordering(1006) 00:15:43.031 fused_ordering(1007) 00:15:43.031 fused_ordering(1008) 00:15:43.031 fused_ordering(1009) 00:15:43.031 fused_ordering(1010) 00:15:43.031 fused_ordering(1011) 00:15:43.031 fused_ordering(1012) 00:15:43.031 fused_ordering(1013) 00:15:43.031 fused_ordering(1014) 00:15:43.031 fused_ordering(1015) 00:15:43.031 fused_ordering(1016) 00:15:43.031 fused_ordering(1017) 00:15:43.031 fused_ordering(1018) 00:15:43.031 fused_ordering(1019) 00:15:43.031 fused_ordering(1020) 00:15:43.031 fused_ordering(1021) 00:15:43.031 fused_ordering(1022) 00:15:43.031 fused_ordering(1023) 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.031 rmmod nvme_tcp 00:15:43.031 rmmod nvme_fabrics 00:15:43.031 rmmod nvme_keyring 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 69780 ']' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 69780 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 69780 ']' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 69780 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69780 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:43.031 killing process with pid 69780 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69780' 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 69780 00:15:43.031 [2024-05-15 02:16:30.991752] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:43.031 02:16:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 69780 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.290 00:15:43.290 real 0m4.508s 00:15:43.290 user 0m5.550s 00:15:43.290 sys 0m1.524s 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:43.290 ************************************ 00:15:43.290 END TEST nvmf_fused_ordering 00:15:43.290 ************************************ 00:15:43.290 02:16:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:43.290 02:16:31 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:43.290 02:16:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:43.290 02:16:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:43.290 02:16:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.290 ************************************ 00:15:43.290 START TEST nvmf_delete_subsystem 00:15:43.290 ************************************ 00:15:43.290 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:43.548 * Looking for test storage... 
00:15:43.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.548 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.549 Cannot find device "nvmf_tgt_br" 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.549 Cannot find device "nvmf_tgt_br2" 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.549 Cannot find device "nvmf_tgt_br" 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.549 Cannot find device "nvmf_tgt_br2" 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.549 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.807 02:16:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:43.807 00:15:43.807 --- 10.0.0.2 ping statistics --- 00:15:43.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.807 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:43.807 00:15:43.807 --- 10.0.0.3 ping statistics --- 00:15:43.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.807 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:43.807 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:43.807 00:15:43.808 --- 10.0.0.1 ping statistics --- 00:15:43.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.808 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=70017 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 70017 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 70017 ']' 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:43.808 02:16:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:43.808 [2024-05-15 02:16:31.767054] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:43.808 [2024-05-15 02:16:31.767161] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.065 [2024-05-15 02:16:31.900740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:44.066 [2024-05-15 02:16:31.961479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.066 [2024-05-15 02:16:31.961528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.066 [2024-05-15 02:16:31.961540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.066 [2024-05-15 02:16:31.961548] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.066 [2024-05-15 02:16:31.961555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.066 [2024-05-15 02:16:31.965425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.066 [2024-05-15 02:16:31.965463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.066 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.066 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:15:44.066 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.066 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.066 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 [2024-05-15 02:16:32.093583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.2 -s 4420 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 [2024-05-15 02:16:32.110069] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:44.324 [2024-05-15 02:16:32.110420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 NULL1 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 Delay0 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=70049 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:44.324 02:16:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:44.324 [2024-05-15 02:16:32.304257] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
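Stripped of the rpc_cmd wrapper and xtrace noise, the target setup and workload driven above reduce to roughly the sketch below. The direct rpc.py invocation is an assumption made for readability (the real delete_subsystem.sh goes through rpc_cmd), and the latency arguments to bdev_delay_create are in microseconds, i.e. roughly one second of injected delay per I/O.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # assumed wrapper; the script itself uses rpc_cmd

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                # null backing bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Queue deep random I/O against the deliberately slow namespace, give it a
# two-second head start, then delete the subsystem out from under it so the
# queued requests come back as the "completed with error" lines that follow.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2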
00:15:46.224 02:16:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.224 02:16:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.224 02:16:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:46.482 Write completed with error (sct=0, sc=8) 00:15:46.482 Write completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 starting I/O failed: -6 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Write completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 starting I/O failed: -6 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Write completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 starting I/O failed: -6 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Write completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 starting I/O failed: -6 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.482 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 [2024-05-15 02:16:34.340523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246d220 is same with the state(5) to be set 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 
00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed 
with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 [2024-05-15 02:16:34.345302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb984000c00 is same with the state(5) to be set 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 
00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 Write completed with error (sct=0, sc=8) 00:15:46.483 Read completed with error (sct=0, sc=8) 00:15:46.483 starting I/O failed: -6 00:15:46.483 [2024-05-15 02:16:34.350754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb98400c470 is same with the state(5) to be set 00:15:47.418 [2024-05-15 02:16:35.317935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246b100 is same with the state(5) to be set 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error 
(sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 [2024-05-15 02:16:35.339348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb98400bfe0 is same with the state(5) to be set 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.418 Write completed with error (sct=0, sc=8) 00:15:47.418 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 [2024-05-15 02:16:35.339993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb98400c780 is same with the state(5) to be set 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 [2024-05-15 02:16:35.343140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246bce0 is same with the state(5) to be set 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error 
(sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Initializing NVMe Controllers 00:15:47.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.419 Controller IO queue size 128, less than required. 00:15:47.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:47.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:47.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:47.419 Initialization complete. Launching workers. 00:15:47.419 ======================================================== 00:15:47.419 Latency(us) 00:15:47.419 Device Information : IOPS MiB/s Average min max 00:15:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.86 0.08 932636.30 401.47 1012866.64 00:15:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.70 0.09 916435.48 3212.89 1016169.59 00:15:47.419 ======================================================== 00:15:47.419 Total : 331.56 0.16 924002.33 401.47 1016169.59 00:15:47.419 00:15:47.419 Write completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 Read completed with error (sct=0, sc=8) 00:15:47.419 [2024-05-15 02:16:35.343563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246d040 is same with the state(5) to be set 00:15:47.419 [2024-05-15 02:16:35.344282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246b100 (9): Bad file descriptor 00:15:47.419 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:47.419 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.419 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:47.419 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 70049 00:15:47.419 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 70049 00:15:47.984 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70049) - No such process 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 70049 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 70049 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@640 -- # type -t wait 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 70049 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.984 [2024-05-15 02:16:35.867121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=70076 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:47.984 02:16:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:48.241 [2024-05-15 02:16:36.037429] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
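The iterations below are a bounded wait for the second perf run (pid 70076, launched for three seconds against the re-created subsystem) to exit on its own; unlike the first round, which used NOT wait 70049 to require a nonzero exit, this run is expected to finish cleanly. A simplified sketch of the loop, inferred from the delete_subsystem.sh line numbers visible in the xtrace (the failure branch is an assumption):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do       # line 57: is perf still running?
    sleep 0.5                                   # line 58
    (( delay++ > 20 )) && exit 1                # line 60: give up after ~10 s (assumed handling)
done
wait "$perf_pid"                                # line 67: reap; success expected this time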
00:15:48.499 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:48.499 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:48.499 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:49.065 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:49.065 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:49.065 02:16:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:49.668 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:49.668 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:49.668 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:49.926 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:49.926 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:49.926 02:16:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:50.492 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:50.492 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:50.492 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.059 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.059 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:51.059 02:16:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.318 Initializing NVMe Controllers 00:15:51.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.318 Controller IO queue size 128, less than required. 00:15:51.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:51.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:51.318 Initialization complete. Launching workers. 
00:15:51.318 ======================================================== 00:15:51.318 Latency(us) 00:15:51.318 Device Information : IOPS MiB/s Average min max 00:15:51.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003009.16 1000210.33 1041331.21 00:15:51.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004932.08 1000223.08 1012066.19 00:15:51.318 ======================================================== 00:15:51.318 Total : 256.00 0.12 1003970.62 1000210.33 1041331.21 00:15:51.318 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70076 00:15:51.576 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70076) - No such process 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 70076 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.576 rmmod nvme_tcp 00:15:51.576 rmmod nvme_fabrics 00:15:51.576 rmmod nvme_keyring 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 70017 ']' 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 70017 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 70017 ']' 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 70017 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70017 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70017' 00:15:51.576 killing process with pid 70017 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 70017 00:15:51.576 [2024-05-15 02:16:39.509297] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:51.576 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 70017 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:51.835 ************************************ 00:15:51.835 END TEST nvmf_delete_subsystem 00:15:51.835 ************************************ 00:15:51.835 00:15:51.835 real 0m8.464s 00:15:51.835 user 0m26.966s 00:15:51.835 sys 0m1.419s 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.835 02:16:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.835 02:16:39 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:51.835 02:16:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:51.835 02:16:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.835 02:16:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.835 ************************************ 00:15:51.835 START TEST nvmf_ns_masking 00:15:51.835 ************************************ 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:51.835 * Looking for test storage... 
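Before the ns_masking script can start its own target, nvmftestinit flushes the previous test's interfaces and rebuilds the same veth/namespace topology: the target addresses 10.0.0.2 and 10.0.0.3 sit on veth ends inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 in the root namespace, and the host-side peers are joined by the nvmf_br bridge. A condensed sketch of that nvmf_veth_init sequence, drawn from the ip and iptables commands that appear earlier in this log and again just below, with the error-tolerant cleanup steps omitted:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"                 # target ends move into the namespace
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge                      # bridge ties the host-side veth peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> first target IP
ping -c 1 10.0.0.3                                   # initiator -> second target IP
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator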
00:15:51.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:15:51.835 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.094 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=a15154bf-c90a-454e-8c33-46fb85bf4b89 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.095 02:16:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:52.095 Cannot find device "nvmf_tgt_br" 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.095 Cannot find device "nvmf_tgt_br2" 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:52.095 Cannot find device "nvmf_tgt_br" 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:15:52.095 Cannot find device "nvmf_tgt_br2" 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.095 02:16:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:52.095 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:52.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:52.354 00:15:52.354 --- 10.0.0.2 ping statistics --- 00:15:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.354 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:52.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:52.354 00:15:52.354 --- 10.0.0.3 ping statistics --- 00:15:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.354 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:52.354 00:15:52.354 --- 10.0.0.1 ping statistics --- 00:15:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.354 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=70282 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 70282 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 70282 ']' 00:15:52.354 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.355 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:52.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
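The trace above is the per-test network bring-up from nvmf_veth_init in test/nvmf/common.sh: two veth pairs carry the target's 10.0.0.2 and 10.0.0.3 listeners out of the nvmf_tgt_ns_spdk namespace, a third pair is the initiator at 10.0.0.1, and all host-side peer interfaces are enslaved to the nvmf_br bridge. Condensed into a standalone sketch (same interface names and addresses as the trace; run as root; this is an illustration of the traced commands, not a verbatim copy of the helper):

# fresh namespace for the SPDK target
ip netns add nvmf_tgt_ns_spdk

# three veth pairs; the *_if ends are the usable interfaces, the *_br ends get bridged
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-facing ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# a single bridge ties the initiator and target legs together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# let NVMe/TCP (port 4420) in, allow forwarding across the bridge, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1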
00:15:52.355 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.355 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:52.355 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:52.355 [2024-05-15 02:16:40.259349] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:52.355 [2024-05-15 02:16:40.259442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.614 [2024-05-15 02:16:40.396711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.614 [2024-05-15 02:16:40.468990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.614 [2024-05-15 02:16:40.469053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.614 [2024-05-15 02:16:40.469067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.614 [2024-05-15 02:16:40.469078] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.614 [2024-05-15 02:16:40.469087] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.614 [2024-05-15 02:16:40.469577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.614 [2024-05-15 02:16:40.473423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.614 [2024-05-15 02:16:40.473537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.614 [2024-05-15 02:16:40.473554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.614 02:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.872 [2024-05-15 02:16:40.859559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.130 02:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:53.130 02:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:53.130 02:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:53.130 Malloc1 00:15:53.388 02:16:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:53.647 Malloc2 00:15:53.647 02:16:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.905 02:16:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:54.163 02:16:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.421 [2024-05-15 02:16:42.292461] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:54.421 [2024-05-15 02:16:42.293045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a15154bf-c90a-454e-8c33-46fb85bf4b89 -a 10.0.0.2 -s 4420 -i 4 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:54.421 02:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:56.950 [ 0]:0x1 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bdfc73f0034f444abc5b070ecfac3e18 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bdfc73f0034f444abc5b070ecfac3e18 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:56.950 [ 0]:0x1 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bdfc73f0034f444abc5b070ecfac3e18 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bdfc73f0034f444abc5b070ecfac3e18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:56.950 [ 1]:0x2 00:15:56.950 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.951 02:16:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:57.208 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:15:57.208 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.208 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:57.208 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.209 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.466 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a15154bf-c90a-454e-8c33-46fb85bf4b89 -a 10.0.0.2 -s 4420 -i 4 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:57.724 02:16:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:00.252 02:16:47 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:00.252 [ 0]:0x2 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 
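Stripped of the xtrace noise, the target configuration and the initiator attach that the trace has performed so far come down to a handful of rpc.py calls plus one nvme connect. A condensed sketch, assuming an nvmf_tgt is already running inside the namespace and answering on the default /var/tmp/spdk.sock; the flag values, NQNs and host UUID are the ones visible in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags copied from the trace
$rpc bdev_malloc_create 64 512 -b Malloc1             # 64 MB malloc bdevs, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect with an explicit host NQN and host ID so the
# per-host masking rules exercised later have something to key on
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I a15154bf-c90a-454e-8c33-46fb85bf4b89 -a 10.0.0.2 -s 4420 -i 4

# waitforserial then polls until a block device carrying the subsystem serial
# shows up; a simple equivalent of that loop:
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done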
00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.252 02:16:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:00.252 [ 0]:0x1 00:16:00.252 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:00.252 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.252 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:00.252 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.252 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bdfc73f0034f444abc5b070ecfac3e18 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bdfc73f0034f444abc5b070ecfac3e18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:00.510 [ 1]:0x2 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.510 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:00.769 [ 0]:0x2 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:16:00.769 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.026 02:16:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a15154bf-c90a-454e-8c33-46fb85bf4b89 -a 10.0.0.2 -s 4420 -i 4 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:01.284 02:16:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 
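The masking behaviour itself is driven by just three RPCs, all of which have now appeared in the trace: attach the namespace with --no-auto-visible so that no host sees it by default, then grant and revoke visibility per host NQN while the controller stays connected. In sketch form, with the same $rpc shorthand as above:

# NSID 1 exists in the subsystem but is invisible to every host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# expose NSID 1 to one specific host; on the already-connected initiator the
# namespace's NGUID switches from all zeros to its real value
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# hide it again without tearing the connection down
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1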
00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:03.813 [ 0]:0x1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bdfc73f0034f444abc5b070ecfac3e18 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bdfc73f0034f444abc5b070ecfac3e18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:03.813 [ 1]:0x2 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:03.813 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 
-o json 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:03.814 [ 0]:0x2 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:03.814 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:04.085 02:16:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:04.346 [2024-05-15 02:16:52.138665] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:04.346 2024/05/15 02:16:52 
error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:16:04.346 request: 00:16:04.346 { 00:16:04.346 "method": "nvmf_ns_remove_host", 00:16:04.346 "params": { 00:16:04.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.346 "nsid": 2, 00:16:04.346 "host": "nqn.2016-06.io.spdk:host1" 00:16:04.346 } 00:16:04.346 } 00:16:04.346 Got JSON-RPC error response 00:16:04.346 GoRPCClient: error on JSON-RPC call 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:04.346 [ 0]:0x2 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3c7405e29689460e8c4d1360348d4adc 
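Two things are worth noting in the block above. First, the Invalid parameters error is expected: the call is wrapped in NOT, i.e. the test wants it to fail for namespace 2, which was attached without --no-auto-visible. Second, the probe the test keeps repeating, ns_is_visible, decides visibility from the NGUID that nvme id-ns reports: a masked namespace comes back as the all-zero placeholder seen in the 00000000... lines. A stripped-down restatement of that probe (helper name and the hard-coded /dev/nvme0 are kept from the trace; the real function lives in test/nvmf/target/ns_masking.sh):

ns_is_visible() {
    local nsid=$1                                   # e.g. 0x1 or 0x2
    nvme list-ns /dev/nvme0 | grep "$nsid"          # prints "[ n]:0xN" when the NSID is listed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    # masked namespaces report the all-zero NGUID, visible ones a real value
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x1 && echo "NSID 1 is visible to this host"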
00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3c7405e29689460e8c4d1360348d4adc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.346 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.913 rmmod nvme_tcp 00:16:04.913 rmmod nvme_fabrics 00:16:04.913 rmmod nvme_keyring 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 70282 ']' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 70282 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 70282 ']' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 70282 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70282 00:16:04.913 killing process with pid 70282 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70282' 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 70282 00:16:04.913 [2024-05-15 02:16:52.730819] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:04.913 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 70282 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.172 02:16:52 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.172 02:16:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.172 02:16:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:05.172 ************************************ 00:16:05.172 END TEST nvmf_ns_masking 00:16:05.172 ************************************ 00:16:05.172 00:16:05.172 real 0m13.238s 00:16:05.172 user 0m53.199s 00:16:05.172 sys 0m2.324s 00:16:05.172 02:16:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:05.172 02:16:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:05.172 02:16:53 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:05.172 02:16:53 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:05.172 02:16:53 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.172 02:16:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:05.172 02:16:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:05.172 02:16:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.172 ************************************ 00:16:05.172 START TEST nvmf_host_management 00:16:05.172 ************************************ 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.172 * Looking for test storage... 
00:16:05.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.172 02:16:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
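The NVME_HOSTNQN / NVME_HOSTID pair exported in this preamble comes from nvme gen-hostnqn, with the host ID being the bare UUID portion of the generated NQN (the two values in the trace differ only by the nqn.2014-08.org.nvmexpress:uuid: prefix). A minimal reproduction of that pairing; the parameter expansion here is only an illustration of the relationship, and common.sh may derive the ID differently:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:b5f40b92-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID, e.g. b5f40b92-c680-4cc4-b45e-3788e6e7a27d
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")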
00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:05.173 Cannot find device "nvmf_tgt_br" 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.173 Cannot find device "nvmf_tgt_br2" 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:16:05.173 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:05.431 Cannot find device "nvmf_tgt_br" 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:05.431 Cannot find device "nvmf_tgt_br2" 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.431 02:16:53 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.431 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:05.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:05.431 00:16:05.431 --- 10.0.0.2 ping statistics --- 00:16:05.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.431 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:05.432 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:05.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:05.432 00:16:05.432 --- 10.0.0.3 ping statistics --- 00:16:05.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.432 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:05.432 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:05.689 00:16:05.689 --- 10.0.0.1 ping statistics --- 00:16:05.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.689 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:05.689 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=70751 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 70751 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 70751 ']' 00:16:05.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
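Condensed, the nvmf_veth_init sequence traced above builds the following topology. Every command is taken from the trace (the individual link-up steps are omitted for brevity), so this is a readable recap of what the log already shows rather than new configuration:

# Host (initiator) side stays in the default namespace, target side moves
# into nvmf_tgt_ns_spdk; the three bridge-side peer ends are enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                         # host -> target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host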
00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:05.690 02:16:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.690 [2024-05-15 02:16:53.534482] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:05.690 [2024-05-15 02:16:53.534595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.948 [2024-05-15 02:16:53.713145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.948 [2024-05-15 02:16:53.788855] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.948 [2024-05-15 02:16:53.788921] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.948 [2024-05-15 02:16:53.788932] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.948 [2024-05-15 02:16:53.788941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.948 [2024-05-15 02:16:53.788948] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
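waitforlisten above simply polls until the freshly launched nvmf_tgt (pid 70751) is alive and its RPC socket answers. The loop below is an illustrative stand-in, not the real common/autotest_common.sh helper; in particular, probing the socket with rpc.py rpc_get_methods is an assumption:

# Illustrative waitforlisten-style loop (the actual helper is not quoted here;
# the rpc_get_methods probe is an assumed way to test the socket).
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    if kill -0 "$nvmfpid" 2>/dev/null &&
       /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break                      # target process is up and listening on the socket
    fi
    sleep 0.1
done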
00:16:05.948 [2024-05-15 02:16:53.789058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.948 [2024-05-15 02:16:53.789850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.948 [2024-05-15 02:16:53.789772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.948 [2024-05-15 02:16:53.789841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 [2024-05-15 02:16:54.684054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 Malloc0 00:16:06.881 [2024-05-15 02:16:54.747823] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:06.881 [2024-05-15 02:16:54.748146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=70817 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 70817 /var/tmp/bdevperf.sock 
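Only nvmf_create_transport is echoed above; the Malloc0 bdev and the 10.0.0.2:4420 listener that show up in the notices come from the RPC batch that host_management.sh cats into rpc_cmd. The batch below is a plausible reconstruction using standard rpc.py commands, with the NQNs, address and sizes taken from the surrounding output and the serial number invented purely for illustration:

# Reconstructed RPC batch (only the transport call is visible in the trace;
# the rest is inferred from the resulting bdev and listener notices).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001   # serial number assumed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0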
00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 70817 ']' 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:06.881 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:06.881 { 00:16:06.881 "params": { 00:16:06.881 "name": "Nvme$subsystem", 00:16:06.881 "trtype": "$TEST_TRANSPORT", 00:16:06.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:06.881 "adrfam": "ipv4", 00:16:06.881 "trsvcid": "$NVMF_PORT", 00:16:06.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:06.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:06.881 "hdgst": ${hdgst:-false}, 00:16:06.881 "ddgst": ${ddgst:-false} 00:16:06.881 }, 00:16:06.881 "method": "bdev_nvme_attach_controller" 00:16:06.881 } 00:16:06.881 EOF 00:16:06.881 )") 00:16:06.882 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:06.882 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:06.882 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:06.882 02:16:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:06.882 "params": { 00:16:06.882 "name": "Nvme0", 00:16:06.882 "trtype": "tcp", 00:16:06.882 "traddr": "10.0.0.2", 00:16:06.882 "adrfam": "ipv4", 00:16:06.882 "trsvcid": "4420", 00:16:06.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:06.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:06.882 "hdgst": false, 00:16:06.882 "ddgst": false 00:16:06.882 }, 00:16:06.882 "method": "bdev_nvme_attach_controller" 00:16:06.882 }' 00:16:06.882 [2024-05-15 02:16:54.853139] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
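gen_nvmf_target_json wraps the parameter block printed above into a bdev-subsystem JSON config and hands it to bdevperf on an anonymous descriptor (/dev/fd/63 here). The inner params are verbatim from the trace; the outer wrapper follows the usual SPDK JSON-config layout and is assumed rather than echoed in the log:

# Assumed shape of the config bdevperf reads via --json /dev/fd/63.
cat <<-'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
JSON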
00:16:06.882 [2024-05-15 02:16:54.853253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:16:07.140 [2024-05-15 02:16:55.027874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.140 [2024-05-15 02:16:55.109990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.398 Running I/O for 10 seconds... 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:07.966 02:16:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.226 02:16:56 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.226 [2024-05-15 02:16:56.017179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ef910 is same with the state(5) to be set 00:16:08.226 [2024-05-15 02:16:56.017228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ef910 is same with the state(5) to be set 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.226 [2024-05-15 02:16:56.026647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.226 [2024-05-15 02:16:56.026708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.026726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.226 [2024-05-15 02:16:56.026737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.026747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.226 [2024-05-15 02:16:56.026756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.026766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.226 [2024-05-15 02:16:56.026775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.026785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220b740 is same with the state(5) to be set 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.226 02:16:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:08.226 [2024-05-15 02:16:56.035555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 
02:16:56.035695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.035989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.226 [2024-05-15 02:16:56.036228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.226 [2024-05-15 02:16:56.036239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.036983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.036992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.227 [2024-05-15 02:16:56.037135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.227 [2024-05-15 02:16:56.037202] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220d4f0 was disconnected and freed. reset controller. 00:16:08.228 [2024-05-15 02:16:56.037262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220b740 (9): Bad file descriptor 00:16:08.228 [2024-05-15 02:16:56.038382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:08.228 task offset: 8192 on job bdev=Nvme0n1 fails 00:16:08.228 00:16:08.228 Latency(us) 00:16:08.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:08.228 Job: Nvme0n1 ended in about 0.79 seconds with error 00:16:08.228 Verification LBA range: start 0x0 length 0x400 00:16:08.228 Nvme0n1 : 0.79 1381.79 86.36 81.28 0.00 42590.93 2085.24 42657.98 00:16:08.228 =================================================================================================================== 00:16:08.228 Total : 1381.79 86.36 81.28 0.00 42590.93 2085.24 42657.98 00:16:08.228 [2024-05-15 02:16:56.040504] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:08.228 [2024-05-15 02:16:56.050768] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
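The long run of ABORTED - SQ DELETION completions above is the intended effect of revoking the host's access while bdevperf still has commands in flight at queue depth 64: the target tears down the TCP qpair, the job ends with an error, and the initiator resets the controller, which succeeds once the host is re-added. Condensed, the sequence host_management.sh drives around this point looks like this (commands taken from the trace, loop body lightly reconstructed):

# Gate on bdevperf having done real I/O, then yank and restore host access.
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                    | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break                 # 963 reads seen in this run
done
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
                                                          # target drops the qpair, in-flight I/O aborts
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1                                                   # let bdevperf's controller reset reconnect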
00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 70817 00:16:09.162 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70817) - No such process 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.162 { 00:16:09.162 "params": { 00:16:09.162 "name": "Nvme$subsystem", 00:16:09.162 "trtype": "$TEST_TRANSPORT", 00:16:09.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.162 "adrfam": "ipv4", 00:16:09.162 "trsvcid": "$NVMF_PORT", 00:16:09.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.162 "hdgst": ${hdgst:-false}, 00:16:09.162 "ddgst": ${ddgst:-false} 00:16:09.162 }, 00:16:09.162 "method": "bdev_nvme_attach_controller" 00:16:09.162 } 00:16:09.162 EOF 00:16:09.162 )") 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:09.162 02:16:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.162 "params": { 00:16:09.162 "name": "Nvme0", 00:16:09.162 "trtype": "tcp", 00:16:09.162 "traddr": "10.0.0.2", 00:16:09.162 "adrfam": "ipv4", 00:16:09.162 "trsvcid": "4420", 00:16:09.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:09.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:09.162 "hdgst": false, 00:16:09.162 "ddgst": false 00:16:09.162 }, 00:16:09.162 "method": "bdev_nvme_attach_controller" 00:16:09.162 }' 00:16:09.162 [2024-05-15 02:16:57.095702] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:09.162 [2024-05-15 02:16:57.095798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70855 ] 00:16:09.419 [2024-05-15 02:16:57.234876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.419 [2024-05-15 02:16:57.306616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.675 Running I/O for 1 seconds... 
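The kill -9 is expected to miss, since bdevperf pid 70817 already exited after the 10-second run. The second, 1-second verify run then checks that the target still serves I/O after the host churn. Written as a stand-alone command under the same assumptions (gen_nvmf_target_json as defined in nvmf/common.sh, and the JSON delivered via process substitution, which is where the /dev/fd/62 descriptor comes from), it is roughly:

# Rough stand-alone equivalent of the second bdevperf invocation above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1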
00:16:10.607 00:16:10.607 Latency(us) 00:16:10.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.607 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:10.607 Verification LBA range: start 0x0 length 0x400 00:16:10.607 Nvme0n1 : 1.04 1415.79 88.49 0.00 0.00 44200.02 5391.83 43134.60 00:16:10.607 =================================================================================================================== 00:16:10.607 Total : 1415.79 88.49 0.00 0.00 44200.02 5391.83 43134.60 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.866 rmmod nvme_tcp 00:16:10.866 rmmod nvme_fabrics 00:16:10.866 rmmod nvme_keyring 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 70751 ']' 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 70751 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 70751 ']' 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 70751 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70751 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70751' 00:16:10.866 killing process with pid 70751 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 70751 00:16:10.866 [2024-05-15 02:16:58.803468] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:16:10.866 02:16:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 70751 00:16:11.125 [2024-05-15 02:16:58.986153] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:11.125 00:16:11.125 real 0m5.995s 00:16:11.125 user 0m23.834s 00:16:11.125 sys 0m1.272s 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.125 02:16:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:11.125 ************************************ 00:16:11.125 END TEST nvmf_host_management 00:16:11.125 ************************************ 00:16:11.125 02:16:59 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:11.125 02:16:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:11.125 02:16:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.125 02:16:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.125 ************************************ 00:16:11.125 START TEST nvmf_lvol 00:16:11.125 ************************************ 00:16:11.125 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:11.384 * Looking for test storage... 
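run_test is the harness glue behind the START TEST / END TEST banners and the real/user/sys timing lines above; an illustrative reduction follows (the actual helper lives in common/autotest_common.sh and also hooks into the timing framework, so treat this as a sketch):

# Illustrative run_test-style wrapper; the banner format matches the log,
# everything else is simplified.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                       # e.g. nvmf_lvol.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}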
00:16:11.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:11.384 02:16:59 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:11.384 Cannot find device "nvmf_tgt_br" 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.384 Cannot find device "nvmf_tgt_br2" 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:11.384 Cannot find device "nvmf_tgt_br" 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:11.384 Cannot find device "nvmf_tgt_br2" 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.384 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:11.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:11.643 00:16:11.643 --- 10.0.0.2 ping statistics --- 00:16:11.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.643 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:11.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:11.643 00:16:11.643 --- 10.0.0.3 ping statistics --- 00:16:11.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.643 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:11.643 00:16:11.643 --- 10.0.0.1 ping statistics --- 00:16:11.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.643 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=71047 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 71047 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 71047 ']' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:11.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:11.643 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:11.643 [2024-05-15 02:16:59.641225] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:11.643 [2024-05-15 02:16:59.641316] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.901 [2024-05-15 02:16:59.782192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.901 [2024-05-15 02:16:59.851049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.901 [2024-05-15 02:16:59.851335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.901 [2024-05-15 02:16:59.851462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.901 [2024-05-15 02:16:59.851604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.901 [2024-05-15 02:16:59.851776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.902 [2024-05-15 02:16:59.851981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.902 [2024-05-15 02:16:59.852090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.902 [2024-05-15 02:16:59.852096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.160 02:16:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.418 [2024-05-15 02:17:00.238760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.418 02:17:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.676 02:17:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:12.676 02:17:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.933 02:17:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:12.933 02:17:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:13.191 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:13.449 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d9be5089-3667-4ec1-9e8a-558188d9d0bc 00:16:13.449 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d9be5089-3667-4ec1-9e8a-558188d9d0bc lvol 20 00:16:13.706 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=42e8b07e-ed19-4d71-a2d2-216e8129ee8a 00:16:13.706 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:14.304 02:17:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42e8b07e-ed19-4d71-a2d2-216e8129ee8a 00:16:14.304 02:17:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:14.563 [2024-05-15 02:17:02.515813] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:16:14.563 [2024-05-15 02:17:02.516087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.563 02:17:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.128 02:17:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=71163 00:16:15.128 02:17:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:15.128 02:17:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:16.061 02:17:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 42e8b07e-ed19-4d71-a2d2-216e8129ee8a MY_SNAPSHOT 00:16:16.319 02:17:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4fe4e745-d7ff-4316-b542-c33f05738762 00:16:16.319 02:17:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 42e8b07e-ed19-4d71-a2d2-216e8129ee8a 30 00:16:16.884 02:17:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4fe4e745-d7ff-4316-b542-c33f05738762 MY_CLONE 00:16:17.142 02:17:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d67fe9e5-2d01-4bd4-ba2f-b6fb49bdb381 00:16:17.142 02:17:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d67fe9e5-2d01-4bd4-ba2f-b6fb49bdb381 00:16:17.707 02:17:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 71163 00:16:25.815 Initializing NVMe Controllers 00:16:25.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:25.815 Controller IO queue size 128, less than required. 00:16:25.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:25.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:25.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:25.815 Initialization complete. Launching workers. 
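For readability, the provisioning path that nvmf_lvol.sh drives through rpc.py in the xtrace above condenses to the sketch below. It is only a restatement of the commands already captured, not an alternative flow; the $rpc, $lvs, $lvol, $snap and $clone shell variables are shorthands introduced here, and the UUIDs are whatever the create calls print on a given run. The perf results for this run follow below.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # same transport options as NVMF_TRANSPORT_OPTS above
$rpc bdev_malloc_create 64 512                                   # Malloc0: MALLOC_BDEV_SIZE=64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID (d9be5089-... in this run)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # LVOL_BDEV_INIT_SIZE=20 MiB
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf (-o 4096 -q 128 -w randwrite -t 10 -c 0x18) writes to the
# exported namespace, the lvol stack is exercised online:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                 # grow to LVOL_BDEV_FINAL_SIZE=30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"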
00:16:25.815 ======================================================== 00:16:25.815 Latency(us) 00:16:25.815 Device Information : IOPS MiB/s Average min max 00:16:25.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9827.30 38.39 13027.43 1685.73 123490.20 00:16:25.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9553.10 37.32 13399.98 3376.75 49205.48 00:16:25.815 ======================================================== 00:16:25.815 Total : 19380.40 75.70 13211.07 1685.73 123490.20 00:16:25.815 00:16:25.815 02:17:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:25.815 02:17:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 42e8b07e-ed19-4d71-a2d2-216e8129ee8a 00:16:26.105 02:17:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9be5089-3667-4ec1-9e8a-558188d9d0bc 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.384 rmmod nvme_tcp 00:16:26.384 rmmod nvme_fabrics 00:16:26.384 rmmod nvme_keyring 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 71047 ']' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 71047 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 71047 ']' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 71047 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71047 00:16:26.384 killing process with pid 71047 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71047' 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 71047 00:16:26.384 [2024-05-15 02:17:14.222409] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:26.384 02:17:14 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 71047 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:26.642 00:16:26.642 real 0m15.381s 00:16:26.642 user 1m4.752s 00:16:26.642 sys 0m3.912s 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:26.642 ************************************ 00:16:26.642 END TEST nvmf_lvol 00:16:26.642 ************************************ 00:16:26.642 02:17:14 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:26.642 02:17:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:26.642 02:17:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.642 02:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.642 ************************************ 00:16:26.642 START TEST nvmf_lvs_grow 00:16:26.642 ************************************ 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:26.642 * Looking for test storage... 
00:16:26.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.642 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:26.643 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:26.901 Cannot find device "nvmf_tgt_br" 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.901 Cannot find device "nvmf_tgt_br2" 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:26.901 Cannot find device "nvmf_tgt_br" 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:26.901 Cannot find device "nvmf_tgt_br2" 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.901 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.901 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:27.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:27.160 00:16:27.160 --- 10.0.0.2 ping statistics --- 00:16:27.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.160 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:27.160 00:16:27.160 --- 10.0.0.3 ping statistics --- 00:16:27.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.160 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:27.160 00:16:27.160 --- 10.0.0.1 ping statistics --- 00:16:27.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.160 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=71460 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 71460 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 71460 ']' 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
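While the lvs_grow target process starts up inside the namespace, note that the interleaved ip/iptables calls from nvmf_veth_init above build the same virtual topology used by the previous suite. Consolidated, and with a $NS shorthand introduced only for brevity, the setup is roughly:

NS=nvmf_tgt_ns_spdk
ip netns add $NS
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
ip link set nvmf_tgt_if  netns $NS
ip link set nvmf_tgt_if2 netns $NS
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if       # first listener address
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if2      # second listener address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec $NS ip link set nvmf_tgt_if up
ip netns exec $NS ip link set nvmf_tgt_if2 up
ip netns exec $NS ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # reachability check

The target is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1, as shown above), so its TCP listeners on 10.0.0.2 and 10.0.0.3 are reached from the initiator's 10.0.0.1 address across the nvmf_br bridge.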
00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.160 02:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:27.160 [2024-05-15 02:17:15.025896] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:27.160 [2024-05-15 02:17:15.026116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.160 [2024-05-15 02:17:15.162523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.418 [2024-05-15 02:17:15.221550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.418 [2024-05-15 02:17:15.221608] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.418 [2024-05-15 02:17:15.221620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.418 [2024-05-15 02:17:15.221629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.418 [2024-05-15 02:17:15.221636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.418 [2024-05-15 02:17:15.221662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.418 02:17:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:27.676 [2024-05-15 02:17:15.625840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 ************************************ 00:16:27.676 START TEST lvs_grow_clean 00:16:27.676 ************************************ 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:27.676 02:17:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:27.676 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:27.934 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:27.934 02:17:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:28.500 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=df045010-1adb-4775-953b-0714df353aa5 00:16:28.500 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:28.500 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:28.758 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:28.758 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:28.758 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u df045010-1adb-4775-953b-0714df353aa5 lvol 150 00:16:29.017 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8aea5410-91a8-4f5d-8679-a863589b7b21 00:16:29.017 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:29.017 02:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:29.275 [2024-05-15 02:17:17.137452] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:29.275 [2024-05-15 02:17:17.137535] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:29.275 true 00:16:29.275 02:17:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:29.275 02:17:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:29.533 02:17:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:29.533 02:17:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:29.791 02:17:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8aea5410-91a8-4f5d-8679-a863589b7b21 00:16:30.050 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:30.347 [2024-05-15 02:17:18.237848] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:30.347 [2024-05-15 02:17:18.238113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.347 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:30.604 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71590 00:16:30.604 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71590 /var/tmp/bdevperf.sock 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 71590 ']' 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.863 02:17:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:30.863 [2024-05-15 02:17:18.668989] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:16:30.863 [2024-05-15 02:17:18.669085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71590 ] 00:16:30.863 [2024-05-15 02:17:18.812401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.121 [2024-05-15 02:17:18.897556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.688 02:17:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.688 02:17:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:31.688 02:17:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:32.253 Nvme0n1 00:16:32.253 02:17:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:32.511 [ 00:16:32.511 { 00:16:32.511 "aliases": [ 00:16:32.511 "8aea5410-91a8-4f5d-8679-a863589b7b21" 00:16:32.511 ], 00:16:32.511 "assigned_rate_limits": { 00:16:32.511 "r_mbytes_per_sec": 0, 00:16:32.511 "rw_ios_per_sec": 0, 00:16:32.511 "rw_mbytes_per_sec": 0, 00:16:32.511 "w_mbytes_per_sec": 0 00:16:32.511 }, 00:16:32.511 "block_size": 4096, 00:16:32.511 "claimed": false, 00:16:32.511 "driver_specific": { 00:16:32.511 "mp_policy": "active_passive", 00:16:32.511 "nvme": [ 00:16:32.511 { 00:16:32.511 "ctrlr_data": { 00:16:32.511 "ana_reporting": false, 00:16:32.511 "cntlid": 1, 00:16:32.511 "firmware_revision": "24.05", 00:16:32.511 "model_number": "SPDK bdev Controller", 00:16:32.511 "multi_ctrlr": true, 00:16:32.511 "oacs": { 00:16:32.511 "firmware": 0, 00:16:32.511 "format": 0, 00:16:32.511 "ns_manage": 0, 00:16:32.511 "security": 0 00:16:32.511 }, 00:16:32.511 "serial_number": "SPDK0", 00:16:32.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.511 "vendor_id": "0x8086" 00:16:32.511 }, 00:16:32.511 "ns_data": { 00:16:32.511 "can_share": true, 00:16:32.511 "id": 1 00:16:32.511 }, 00:16:32.511 "trid": { 00:16:32.511 "adrfam": "IPv4", 00:16:32.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.511 "traddr": "10.0.0.2", 00:16:32.511 "trsvcid": "4420", 00:16:32.511 "trtype": "TCP" 00:16:32.511 }, 00:16:32.511 "vs": { 00:16:32.511 "nvme_version": "1.3" 00:16:32.511 } 00:16:32.511 } 00:16:32.511 ] 00:16:32.511 }, 00:16:32.511 "memory_domains": [ 00:16:32.511 { 00:16:32.511 "dma_device_id": "system", 00:16:32.511 "dma_device_type": 1 00:16:32.511 } 00:16:32.511 ], 00:16:32.511 "name": "Nvme0n1", 00:16:32.511 "num_blocks": 38912, 00:16:32.511 "product_name": "NVMe disk", 00:16:32.511 "supported_io_types": { 00:16:32.511 "abort": true, 00:16:32.511 "compare": true, 00:16:32.511 "compare_and_write": true, 00:16:32.511 "flush": true, 00:16:32.511 "nvme_admin": true, 00:16:32.511 "nvme_io": true, 00:16:32.511 "read": true, 00:16:32.511 "reset": true, 00:16:32.511 "unmap": true, 00:16:32.511 "write": true, 00:16:32.511 "write_zeroes": true 00:16:32.511 }, 00:16:32.511 "uuid": "8aea5410-91a8-4f5d-8679-a863589b7b21", 00:16:32.511 "zoned": false 00:16:32.511 } 00:16:32.511 ] 00:16:32.511 02:17:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71626 00:16:32.511 02:17:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:32.511 02:17:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:32.511 Running I/O for 10 seconds... 00:16:33.444 Latency(us) 00:16:33.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.444 Nvme0n1 : 1.00 7609.00 29.72 0.00 0.00 0.00 0.00 0.00 00:16:33.444 =================================================================================================================== 00:16:33.444 Total : 7609.00 29.72 0.00 0.00 0.00 0.00 0.00 00:16:33.444 00:16:34.379 02:17:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df045010-1adb-4775-953b-0714df353aa5 00:16:34.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.664 Nvme0n1 : 2.00 7538.50 29.45 0.00 0.00 0.00 0.00 0.00 00:16:34.664 =================================================================================================================== 00:16:34.664 Total : 7538.50 29.45 0.00 0.00 0.00 0.00 0.00 00:16:34.664 00:16:34.924 true 00:16:34.924 02:17:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:34.924 02:17:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:35.182 02:17:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:35.182 02:17:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:35.182 02:17:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 71626 00:16:35.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.441 Nvme0n1 : 3.00 7495.33 29.28 0.00 0.00 0.00 0.00 0.00 00:16:35.441 =================================================================================================================== 00:16:35.441 Total : 7495.33 29.28 0.00 0.00 0.00 0.00 0.00 00:16:35.441 00:16:36.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.819 Nvme0n1 : 4.00 7250.25 28.32 0.00 0.00 0.00 0.00 0.00 00:16:36.819 =================================================================================================================== 00:16:36.819 Total : 7250.25 28.32 0.00 0.00 0.00 0.00 0.00 00:16:36.819 00:16:37.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.387 Nvme0n1 : 5.00 7297.80 28.51 0.00 0.00 0.00 0.00 0.00 00:16:37.387 =================================================================================================================== 00:16:37.387 Total : 7297.80 28.51 0.00 0.00 0.00 0.00 0.00 00:16:37.387 00:16:38.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.762 Nvme0n1 : 6.00 7329.67 28.63 0.00 0.00 0.00 0.00 0.00 00:16:38.762 =================================================================================================================== 00:16:38.762 Total : 7329.67 28.63 0.00 0.00 0.00 0.00 0.00 00:16:38.762 00:16:39.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:16:39.709 Nvme0n1 : 7.00 7372.29 28.80 0.00 0.00 0.00 0.00 0.00 00:16:39.709 =================================================================================================================== 00:16:39.709 Total : 7372.29 28.80 0.00 0.00 0.00 0.00 0.00 00:16:39.709 00:16:40.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.644 Nvme0n1 : 8.00 7391.62 28.87 0.00 0.00 0.00 0.00 0.00 00:16:40.644 =================================================================================================================== 00:16:40.644 Total : 7391.62 28.87 0.00 0.00 0.00 0.00 0.00 00:16:40.644 00:16:41.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.580 Nvme0n1 : 9.00 7424.78 29.00 0.00 0.00 0.00 0.00 0.00 00:16:41.580 =================================================================================================================== 00:16:41.580 Total : 7424.78 29.00 0.00 0.00 0.00 0.00 0.00 00:16:41.580 00:16:42.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.517 Nvme0n1 : 10.00 7444.60 29.08 0.00 0.00 0.00 0.00 0.00 00:16:42.517 =================================================================================================================== 00:16:42.517 Total : 7444.60 29.08 0.00 0.00 0.00 0.00 0.00 00:16:42.517 00:16:42.517 00:16:42.517 Latency(us) 00:16:42.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.517 Nvme0n1 : 10.01 7447.99 29.09 0.00 0.00 17173.45 8043.05 53858.68 00:16:42.517 =================================================================================================================== 00:16:42.517 Total : 7447.99 29.09 0.00 0.00 17173.45 8043.05 53858.68 00:16:42.517 0 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71590 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 71590 ']' 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 71590 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71590 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71590' 00:16:42.517 killing process with pid 71590 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 71590 00:16:42.517 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.517 00:16:42.517 Latency(us) 00:16:42.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.517 =================================================================================================================== 00:16:42.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.517 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 71590 00:16:42.775 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.033 02:17:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.292 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:43.292 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:43.551 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:43.551 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:43.551 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:43.808 [2024-05-15 02:17:31.725076] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.808 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.809 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:43.809 02:17:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:44.066 2024/05/15 02:17:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:df045010-1adb-4775-953b-0714df353aa5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:16:44.066 request: 00:16:44.066 { 00:16:44.066 "method": "bdev_lvol_get_lvstores", 00:16:44.066 "params": { 00:16:44.066 "uuid": 
"df045010-1adb-4775-953b-0714df353aa5" 00:16:44.066 } 00:16:44.066 } 00:16:44.066 Got JSON-RPC error response 00:16:44.066 GoRPCClient: error on JSON-RPC call 00:16:44.066 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:44.066 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.066 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.066 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.066 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:44.632 aio_bdev 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8aea5410-91a8-4f5d-8679-a863589b7b21 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=8aea5410-91a8-4f5d-8679-a863589b7b21 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:44.632 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.890 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8aea5410-91a8-4f5d-8679-a863589b7b21 -t 2000 00:16:44.890 [ 00:16:44.890 { 00:16:44.890 "aliases": [ 00:16:44.890 "lvs/lvol" 00:16:44.890 ], 00:16:44.890 "assigned_rate_limits": { 00:16:44.890 "r_mbytes_per_sec": 0, 00:16:44.890 "rw_ios_per_sec": 0, 00:16:44.890 "rw_mbytes_per_sec": 0, 00:16:44.890 "w_mbytes_per_sec": 0 00:16:44.890 }, 00:16:44.890 "block_size": 4096, 00:16:44.890 "claimed": false, 00:16:44.890 "driver_specific": { 00:16:44.890 "lvol": { 00:16:44.890 "base_bdev": "aio_bdev", 00:16:44.890 "clone": false, 00:16:44.890 "esnap_clone": false, 00:16:44.890 "lvol_store_uuid": "df045010-1adb-4775-953b-0714df353aa5", 00:16:44.890 "num_allocated_clusters": 38, 00:16:44.890 "snapshot": false, 00:16:44.890 "thin_provision": false 00:16:44.890 } 00:16:44.890 }, 00:16:44.890 "name": "8aea5410-91a8-4f5d-8679-a863589b7b21", 00:16:44.890 "num_blocks": 38912, 00:16:44.890 "product_name": "Logical Volume", 00:16:44.890 "supported_io_types": { 00:16:44.890 "abort": false, 00:16:44.890 "compare": false, 00:16:44.890 "compare_and_write": false, 00:16:44.890 "flush": false, 00:16:44.890 "nvme_admin": false, 00:16:44.890 "nvme_io": false, 00:16:44.890 "read": true, 00:16:44.890 "reset": true, 00:16:44.891 "unmap": true, 00:16:44.891 "write": true, 00:16:44.891 "write_zeroes": true 00:16:44.891 }, 00:16:44.891 "uuid": "8aea5410-91a8-4f5d-8679-a863589b7b21", 00:16:44.891 "zoned": false 00:16:44.891 } 00:16:44.891 ] 00:16:44.891 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:45.149 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:45.149 02:17:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:45.149 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:45.149 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df045010-1adb-4775-953b-0714df353aa5 00:16:45.149 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:45.713 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:45.714 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8aea5410-91a8-4f5d-8679-a863589b7b21 00:16:45.714 02:17:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df045010-1adb-4775-953b-0714df353aa5 00:16:46.279 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:46.538 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:46.796 ************************************ 00:16:46.796 END TEST lvs_grow_clean 00:16:46.796 ************************************ 00:16:46.796 00:16:46.796 real 0m19.096s 00:16:46.796 user 0m18.548s 00:16:46.796 sys 0m2.171s 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.796 02:17:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:47.063 ************************************ 00:16:47.063 START TEST lvs_grow_dirty 00:16:47.063 ************************************ 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:47.063 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:47.064 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:47.064 02:17:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:47.064 02:17:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:47.331 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:47.331 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:47.589 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4c293820-11bf-4351-82ee-7dd937fb0788 00:16:47.589 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:16:47.589 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:47.846 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:47.846 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:47.846 02:17:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4c293820-11bf-4351-82ee-7dd937fb0788 lvol 150 00:16:48.413 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=77daea2c-dc0a-440b-8207-1a75c8023dc8 00:16:48.413 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:48.413 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:48.413 [2024-05-15 02:17:36.395101] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:48.413 [2024-05-15 02:17:36.395181] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:48.413 true 00:16:48.413 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:16:48.413 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:48.980 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:48.980 02:17:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:49.239 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77daea2c-dc0a-440b-8207-1a75c8023dc8 00:16:49.498 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
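Condensed into plain shell, the target-side setup the trace above just performed looks roughly like the sketch below; the paths, sizes and RPC arguments come from the log, while the $RPC/$AIO/$lvs/$lvol shorthand is only illustrative, not names the test script itself uses:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO"                                  # 200M backing file
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                # AIO bdev, 4k block size
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 49 x 4M data clusters
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)         # 150M lvol (38 clusters)
    truncate -s 400M "$AIO"                                  # grow the file...
    $RPC bdev_aio_rescan aio_bdev                            # ...and let the bdev see it
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420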
00:16:49.498 [2024-05-15 02:17:37.503636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71929 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71929 /var/tmp/bdevperf.sock 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 71929 ']' 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.756 02:17:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:50.015 [2024-05-15 02:17:37.808136] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
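The I/O side is driven the same way in the clean and dirty variants: bdevperf starts with -z so it idles until RPCs arrive on its private socket, the test then attaches an NVMe-oF controller through that socket, and perform_tests kicks off the 10-second randwrite job whose per-second table follows. A rough sketch using the socket path and flags from the log ($BPERF/$SOCK/$RPC are illustrative names):

    BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bdevperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $BPERF -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # (the harness waits for the socket to appear before issuing RPCs)
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
         -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0      # exposes bdev Nvme0n1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
         -s "$SOCK" perform_tests                           # prints the per-second table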
00:16:50.015 [2024-05-15 02:17:37.808236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71929 ] 00:16:50.015 [2024-05-15 02:17:37.943644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.015 [2024-05-15 02:17:38.014350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.949 02:17:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:50.949 02:17:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:50.949 02:17:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:51.207 Nvme0n1 00:16:51.207 02:17:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:51.464 [ 00:16:51.464 { 00:16:51.464 "aliases": [ 00:16:51.464 "77daea2c-dc0a-440b-8207-1a75c8023dc8" 00:16:51.464 ], 00:16:51.464 "assigned_rate_limits": { 00:16:51.464 "r_mbytes_per_sec": 0, 00:16:51.464 "rw_ios_per_sec": 0, 00:16:51.464 "rw_mbytes_per_sec": 0, 00:16:51.464 "w_mbytes_per_sec": 0 00:16:51.464 }, 00:16:51.464 "block_size": 4096, 00:16:51.464 "claimed": false, 00:16:51.464 "driver_specific": { 00:16:51.464 "mp_policy": "active_passive", 00:16:51.464 "nvme": [ 00:16:51.464 { 00:16:51.464 "ctrlr_data": { 00:16:51.464 "ana_reporting": false, 00:16:51.464 "cntlid": 1, 00:16:51.464 "firmware_revision": "24.05", 00:16:51.464 "model_number": "SPDK bdev Controller", 00:16:51.464 "multi_ctrlr": true, 00:16:51.464 "oacs": { 00:16:51.464 "firmware": 0, 00:16:51.464 "format": 0, 00:16:51.464 "ns_manage": 0, 00:16:51.464 "security": 0 00:16:51.464 }, 00:16:51.464 "serial_number": "SPDK0", 00:16:51.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.464 "vendor_id": "0x8086" 00:16:51.464 }, 00:16:51.464 "ns_data": { 00:16:51.464 "can_share": true, 00:16:51.464 "id": 1 00:16:51.464 }, 00:16:51.464 "trid": { 00:16:51.464 "adrfam": "IPv4", 00:16:51.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.464 "traddr": "10.0.0.2", 00:16:51.464 "trsvcid": "4420", 00:16:51.464 "trtype": "TCP" 00:16:51.464 }, 00:16:51.464 "vs": { 00:16:51.464 "nvme_version": "1.3" 00:16:51.464 } 00:16:51.464 } 00:16:51.464 ] 00:16:51.464 }, 00:16:51.464 "memory_domains": [ 00:16:51.464 { 00:16:51.464 "dma_device_id": "system", 00:16:51.464 "dma_device_type": 1 00:16:51.464 } 00:16:51.464 ], 00:16:51.464 "name": "Nvme0n1", 00:16:51.464 "num_blocks": 38912, 00:16:51.464 "product_name": "NVMe disk", 00:16:51.464 "supported_io_types": { 00:16:51.464 "abort": true, 00:16:51.464 "compare": true, 00:16:51.464 "compare_and_write": true, 00:16:51.464 "flush": true, 00:16:51.464 "nvme_admin": true, 00:16:51.464 "nvme_io": true, 00:16:51.464 "read": true, 00:16:51.464 "reset": true, 00:16:51.464 "unmap": true, 00:16:51.464 "write": true, 00:16:51.464 "write_zeroes": true 00:16:51.464 }, 00:16:51.464 "uuid": "77daea2c-dc0a-440b-8207-1a75c8023dc8", 00:16:51.464 "zoned": false 00:16:51.464 } 00:16:51.464 ] 00:16:51.464 02:17:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71970 00:16:51.464 02:17:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:51.464 02:17:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:51.722 Running I/O for 10 seconds... 00:16:52.657 Latency(us) 00:16:52.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.657 Nvme0n1 : 1.00 8051.00 31.45 0.00 0.00 0.00 0.00 0.00 00:16:52.657 =================================================================================================================== 00:16:52.657 Total : 8051.00 31.45 0.00 0.00 0.00 0.00 0.00 00:16:52.657 00:16:53.592 02:17:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:16:53.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.592 Nvme0n1 : 2.00 8190.00 31.99 0.00 0.00 0.00 0.00 0.00 00:16:53.592 =================================================================================================================== 00:16:53.592 Total : 8190.00 31.99 0.00 0.00 0.00 0.00 0.00 00:16:53.592 00:16:53.850 true 00:16:53.850 02:17:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:16:53.850 02:17:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:54.108 02:17:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:54.108 02:17:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:54.108 02:17:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 71970 00:16:54.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.674 Nvme0n1 : 3.00 8294.33 32.40 0.00 0.00 0.00 0.00 0.00 00:16:54.674 =================================================================================================================== 00:16:54.674 Total : 8294.33 32.40 0.00 0.00 0.00 0.00 0.00 00:16:54.674 00:16:55.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.608 Nvme0n1 : 4.00 8322.50 32.51 0.00 0.00 0.00 0.00 0.00 00:16:55.608 =================================================================================================================== 00:16:55.608 Total : 8322.50 32.51 0.00 0.00 0.00 0.00 0.00 00:16:55.608 00:16:56.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.549 Nvme0n1 : 5.00 8334.80 32.56 0.00 0.00 0.00 0.00 0.00 00:16:56.549 =================================================================================================================== 00:16:56.549 Total : 8334.80 32.56 0.00 0.00 0.00 0.00 0.00 00:16:56.549 00:16:57.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.924 Nvme0n1 : 6.00 7698.17 30.07 0.00 0.00 0.00 0.00 0.00 00:16:57.924 =================================================================================================================== 00:16:57.924 Total : 7698.17 30.07 0.00 0.00 0.00 0.00 0.00 00:16:57.924 00:16:58.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:16:58.861 Nvme0n1 : 7.00 7584.43 29.63 0.00 0.00 0.00 0.00 0.00 00:16:58.861 =================================================================================================================== 00:16:58.861 Total : 7584.43 29.63 0.00 0.00 0.00 0.00 0.00 00:16:58.861 00:16:59.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.797 Nvme0n1 : 8.00 7629.38 29.80 0.00 0.00 0.00 0.00 0.00 00:16:59.797 =================================================================================================================== 00:16:59.797 Total : 7629.38 29.80 0.00 0.00 0.00 0.00 0.00 00:16:59.797 00:17:00.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.734 Nvme0n1 : 9.00 7665.44 29.94 0.00 0.00 0.00 0.00 0.00 00:17:00.734 =================================================================================================================== 00:17:00.734 Total : 7665.44 29.94 0.00 0.00 0.00 0.00 0.00 00:17:00.734 00:17:01.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.670 Nvme0n1 : 10.00 7684.40 30.02 0.00 0.00 0.00 0.00 0.00 00:17:01.670 =================================================================================================================== 00:17:01.670 Total : 7684.40 30.02 0.00 0.00 0.00 0.00 0.00 00:17:01.670 00:17:01.670 00:17:01.670 Latency(us) 00:17:01.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.670 Nvme0n1 : 10.01 7687.50 30.03 0.00 0.00 16639.07 7208.96 583389.56 00:17:01.670 =================================================================================================================== 00:17:01.670 Total : 7687.50 30.03 0.00 0.00 16639.07 7208.96 583389.56 00:17:01.670 0 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71929 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 71929 ']' 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 71929 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.670 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71929 00:17:01.670 killing process with pid 71929 00:17:01.671 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.671 00:17:01.671 Latency(us) 00:17:01.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.671 =================================================================================================================== 00:17:01.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.671 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:01.671 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:01.671 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71929' 00:17:01.671 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 71929 00:17:01.671 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@970 -- # wait 71929 00:17:01.930 02:17:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.189 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:02.448 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:02.448 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 71460 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 71460 00:17:02.706 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 71460 Killed "${NVMF_APP[@]}" "$@" 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=72067 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 72067 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 72067 ']' 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.706 02:17:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:02.706 [2024-05-15 02:17:50.665866] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
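What makes this the dirty variant is visible right above: the nvmf_tgt that owns the lvstore (pid 71460 in this run) is killed with SIGKILL, so the blobstore is never cleanly unloaded, and a fresh target is started in its place. When aio_bdev is re-created over the same file further down, the load path reports "Performing recovery on blobstore" instead of a normal open. In shell terms, approximately ($old_nvmfpid is an illustrative name):

    kill -9 "$old_nvmfpid"      # pid 71460 here; the lvstore is never cleanly unloaded
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # re-creating aio_bdev over the same file later hits
    # "Performing recovery on blobstore" instead of a clean load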
00:17:02.706 [2024-05-15 02:17:50.665953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.967 [2024-05-15 02:17:50.801012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.967 [2024-05-15 02:17:50.870676] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.967 [2024-05-15 02:17:50.870736] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.967 [2024-05-15 02:17:50.870750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.967 [2024-05-15 02:17:50.870760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.967 [2024-05-15 02:17:50.870769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.967 [2024-05-15 02:17:50.870798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.902 02:17:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:04.160 [2024-05-15 02:17:51.990932] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:04.160 [2024-05-15 02:17:51.991221] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:04.160 [2024-05-15 02:17:51.991448] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:04.160 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:04.160 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 77daea2c-dc0a-440b-8207-1a75c8023dc8 00:17:04.160 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=77daea2c-dc0a-440b-8207-1a75c8023dc8 00:17:04.160 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:04.161 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:04.161 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:04.161 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:04.161 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:04.418 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77daea2c-dc0a-440b-8207-1a75c8023dc8 -t 2000 00:17:04.677 [ 00:17:04.677 { 00:17:04.677 "aliases": [ 00:17:04.677 "lvs/lvol" 00:17:04.677 ], 00:17:04.677 "assigned_rate_limits": { 00:17:04.677 "r_mbytes_per_sec": 0, 00:17:04.677 "rw_ios_per_sec": 0, 00:17:04.677 "rw_mbytes_per_sec": 0, 00:17:04.677 "w_mbytes_per_sec": 0 00:17:04.677 }, 00:17:04.677 "block_size": 4096, 00:17:04.677 "claimed": false, 00:17:04.677 "driver_specific": { 00:17:04.677 "lvol": { 00:17:04.677 "base_bdev": "aio_bdev", 00:17:04.677 "clone": false, 00:17:04.677 "esnap_clone": false, 00:17:04.677 "lvol_store_uuid": "4c293820-11bf-4351-82ee-7dd937fb0788", 00:17:04.677 "num_allocated_clusters": 38, 00:17:04.677 "snapshot": false, 00:17:04.677 "thin_provision": false 00:17:04.677 } 00:17:04.677 }, 00:17:04.677 "name": "77daea2c-dc0a-440b-8207-1a75c8023dc8", 00:17:04.677 "num_blocks": 38912, 00:17:04.677 "product_name": "Logical Volume", 00:17:04.677 "supported_io_types": { 00:17:04.677 "abort": false, 00:17:04.677 "compare": false, 00:17:04.677 "compare_and_write": false, 00:17:04.677 "flush": false, 00:17:04.677 "nvme_admin": false, 00:17:04.677 "nvme_io": false, 00:17:04.677 "read": true, 00:17:04.677 "reset": true, 00:17:04.677 "unmap": true, 00:17:04.677 "write": true, 00:17:04.677 "write_zeroes": true 00:17:04.677 }, 00:17:04.677 "uuid": "77daea2c-dc0a-440b-8207-1a75c8023dc8", 00:17:04.677 "zoned": false 00:17:04.677 } 00:17:04.677 ] 00:17:04.677 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:04.677 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:04.677 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:04.935 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:04.935 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:04.935 02:17:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:05.504 [2024-05-15 02:17:53.456642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:05.504 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:05.763 2024/05/15 02:17:53 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4c293820-11bf-4351-82ee-7dd937fb0788], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:06.021 request: 00:17:06.021 { 00:17:06.021 "method": "bdev_lvol_get_lvstores", 00:17:06.021 "params": { 00:17:06.021 "uuid": "4c293820-11bf-4351-82ee-7dd937fb0788" 00:17:06.021 } 00:17:06.021 } 00:17:06.021 Got JSON-RPC error response 00:17:06.021 GoRPCClient: error on JSON-RPC call 00:17:06.021 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:06.021 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.021 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.021 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.021 02:17:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.279 aio_bdev 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 77daea2c-dc0a-440b-8207-1a75c8023dc8 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=77daea2c-dc0a-440b-8207-1a75c8023dc8 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:06.279 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:06.537 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77daea2c-dc0a-440b-8207-1a75c8023dc8 -t 2000 00:17:06.796 [ 00:17:06.796 { 00:17:06.796 "aliases": [ 00:17:06.796 "lvs/lvol" 00:17:06.796 ], 00:17:06.796 
"assigned_rate_limits": { 00:17:06.796 "r_mbytes_per_sec": 0, 00:17:06.796 "rw_ios_per_sec": 0, 00:17:06.796 "rw_mbytes_per_sec": 0, 00:17:06.796 "w_mbytes_per_sec": 0 00:17:06.796 }, 00:17:06.796 "block_size": 4096, 00:17:06.796 "claimed": false, 00:17:06.796 "driver_specific": { 00:17:06.796 "lvol": { 00:17:06.796 "base_bdev": "aio_bdev", 00:17:06.796 "clone": false, 00:17:06.796 "esnap_clone": false, 00:17:06.796 "lvol_store_uuid": "4c293820-11bf-4351-82ee-7dd937fb0788", 00:17:06.796 "num_allocated_clusters": 38, 00:17:06.796 "snapshot": false, 00:17:06.796 "thin_provision": false 00:17:06.796 } 00:17:06.796 }, 00:17:06.796 "name": "77daea2c-dc0a-440b-8207-1a75c8023dc8", 00:17:06.796 "num_blocks": 38912, 00:17:06.796 "product_name": "Logical Volume", 00:17:06.796 "supported_io_types": { 00:17:06.796 "abort": false, 00:17:06.796 "compare": false, 00:17:06.796 "compare_and_write": false, 00:17:06.796 "flush": false, 00:17:06.796 "nvme_admin": false, 00:17:06.796 "nvme_io": false, 00:17:06.796 "read": true, 00:17:06.796 "reset": true, 00:17:06.796 "unmap": true, 00:17:06.796 "write": true, 00:17:06.796 "write_zeroes": true 00:17:06.796 }, 00:17:06.796 "uuid": "77daea2c-dc0a-440b-8207-1a75c8023dc8", 00:17:06.796 "zoned": false 00:17:06.796 } 00:17:06.796 ] 00:17:06.796 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:06.796 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:06.796 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:07.055 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:07.055 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:07.055 02:17:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:07.313 02:17:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:07.313 02:17:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 77daea2c-dc0a-440b-8207-1a75c8023dc8 00:17:07.585 02:17:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c293820-11bf-4351-82ee-7dd937fb0788 00:17:07.843 02:17:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.100 02:17:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:08.786 00:17:08.786 real 0m21.671s 00:17:08.786 user 0m44.886s 00:17:08.786 sys 0m7.799s 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:08.786 ************************************ 00:17:08.786 END TEST lvs_grow_dirty 00:17:08.786 ************************************ 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:08.786 nvmf_trace.0 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.786 rmmod nvme_tcp 00:17:08.786 rmmod nvme_fabrics 00:17:08.786 rmmod nvme_keyring 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.786 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 72067 ']' 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 72067 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 72067 ']' 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 72067 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72067 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:08.787 killing process with pid 72067 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72067' 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 72067 00:17:08.787 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 72067 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:09.058 00:17:09.058 real 0m42.431s 00:17:09.058 user 1m10.174s 00:17:09.058 sys 0m10.592s 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.058 ************************************ 00:17:09.058 END TEST nvmf_lvs_grow 00:17:09.058 02:17:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:09.058 ************************************ 00:17:09.058 02:17:57 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:09.058 02:17:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:09.058 02:17:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.058 02:17:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.058 ************************************ 00:17:09.058 START TEST nvmf_bdev_io_wait 00:17:09.058 ************************************ 00:17:09.058 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:09.315 * Looking for test storage... 00:17:09.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:09.315 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.316 
02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:09.316 Cannot find device "nvmf_tgt_br" 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.316 Cannot find device "nvmf_tgt_br2" 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:09.316 Cannot find device "nvmf_tgt_br" 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:09.316 Cannot find device "nvmf_tgt_br2" 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:09.316 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
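The "Cannot find device" and "Cannot open network namespace" messages above are expected rather than failures: before building the test network, nvmf_veth_init first tears down whatever a previous run may have left behind, and on a clean host every one of those delete commands has nothing to remove. The trace shows each failure being swallowed (the bare "true" entries), which is consistent with an "|| true"-style guard; that guard is an assumption in the sketch below, only the command names are taken from the trace.

# Teardown-before-setup sketch: remove leftovers from an earlier run, tolerating
# the expected failures on a host where nothing exists yet ("|| true" assumed).
ip link set nvmf_init_br nomaster || true
ip link set nvmf_tgt_br nomaster || true
ip link set nvmf_tgt_br2 nomaster || true
ip link set nvmf_init_br down || true
ip link set nvmf_tgt_br down || true
ip link set nvmf_tgt_br2 down || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true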
00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:09.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:09.573 00:17:09.573 --- 10.0.0.2 ping statistics --- 00:17:09.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.573 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:09.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:09.573 00:17:09.573 --- 10.0.0.3 ping statistics --- 00:17:09.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.573 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:09.573 00:17:09.573 --- 10.0.0.1 ping statistics --- 00:17:09.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.573 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=72445 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 72445 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 72445 ']' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.573 02:17:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.573 [2024-05-15 02:17:57.535748] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:09.573 [2024-05-15 02:17:57.535829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.832 [2024-05-15 02:17:57.672079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.832 [2024-05-15 02:17:57.731412] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.832 [2024-05-15 02:17:57.731463] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:09.832 [2024-05-15 02:17:57.731474] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.832 [2024-05-15 02:17:57.731482] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.832 [2024-05-15 02:17:57.731489] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.832 [2024-05-15 02:17:57.731603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.832 [2024-05-15 02:17:57.731796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.832 [2024-05-15 02:17:57.732382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.832 [2024-05-15 02:17:57.732423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.767 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 [2024-05-15 02:17:58.676879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 Malloc0 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 
02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 [2024-05-15 02:17:58.719043] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:10.768 [2024-05-15 02:17:58.719399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72492 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=72494 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.768 { 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme$subsystem", 00:17:10.768 "trtype": "$TEST_TRANSPORT", 00:17:10.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "$NVMF_PORT", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.768 "hdgst": ${hdgst:-false}, 00:17:10.768 "ddgst": ${ddgst:-false} 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 } 00:17:10.768 EOF 00:17:10.768 )") 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72496 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.768 { 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme$subsystem", 00:17:10.768 "trtype": "$TEST_TRANSPORT", 00:17:10.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "$NVMF_PORT", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.768 "hdgst": ${hdgst:-false}, 00:17:10.768 "ddgst": ${ddgst:-false} 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 } 00:17:10.768 EOF 00:17:10.768 )") 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72499 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.768 { 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme$subsystem", 00:17:10.768 "trtype": "$TEST_TRANSPORT", 00:17:10.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "$NVMF_PORT", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.768 "hdgst": ${hdgst:-false}, 00:17:10.768 "ddgst": ${ddgst:-false} 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 } 00:17:10.768 EOF 00:17:10.768 )") 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme1", 00:17:10.768 "trtype": "tcp", 00:17:10.768 "traddr": "10.0.0.2", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "4420", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.768 "hdgst": false, 00:17:10.768 "ddgst": false 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 }' 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme1", 00:17:10.768 "trtype": "tcp", 00:17:10.768 "traddr": "10.0.0.2", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "4420", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.768 "hdgst": false, 00:17:10.768 "ddgst": false 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 }' 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme1", 00:17:10.768 "trtype": "tcp", 00:17:10.768 "traddr": "10.0.0.2", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "4420", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.768 "hdgst": false, 00:17:10.768 "ddgst": false 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 }' 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.768 { 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme$subsystem", 00:17:10.768 "trtype": "$TEST_TRANSPORT", 00:17:10.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "$NVMF_PORT", 00:17:10.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.768 "hdgst": ${hdgst:-false}, 00:17:10.768 "ddgst": ${ddgst:-false} 00:17:10.768 }, 00:17:10.768 "method": "bdev_nvme_attach_controller" 00:17:10.768 } 00:17:10.768 EOF 00:17:10.768 )") 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:10.768 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.768 "params": { 00:17:10.768 "name": "Nvme1", 00:17:10.768 "trtype": "tcp", 00:17:10.768 "traddr": "10.0.0.2", 00:17:10.768 "adrfam": "ipv4", 00:17:10.768 "trsvcid": "4420", 00:17:10.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.769 "hdgst": false, 00:17:10.769 "ddgst": false 00:17:10.769 }, 00:17:10.769 "method": "bdev_nvme_attach_controller" 00:17:10.769 }' 00:17:11.027 [2024-05-15 02:17:58.779573] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:11.027 [2024-05-15 02:17:58.779653] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:11.027 02:17:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 72492 00:17:11.027 [2024-05-15 02:17:58.790113] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:11.027 [2024-05-15 02:17:58.790186] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:11.027 [2024-05-15 02:17:58.805955] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:11.027 [2024-05-15 02:17:58.806062] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:11.027 [2024-05-15 02:17:58.813864] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:11.027 [2024-05-15 02:17:58.813969] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:11.027 [2024-05-15 02:17:58.952659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.027 [2024-05-15 02:17:59.000826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.027 [2024-05-15 02:17:59.024133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:11.027 [2024-05-15 02:17:59.039096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.284 [2024-05-15 02:17:59.056420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:11.284 [2024-05-15 02:17:59.093421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.284 [2024-05-15 02:17:59.094226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:11.284 [2024-05-15 02:17:59.149535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:11.284 Running I/O for 1 seconds... 00:17:11.284 Running I/O for 1 seconds... 00:17:11.284 Running I/O for 1 seconds... 00:17:11.284 Running I/O for 1 seconds... 
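At this point the bdev_io_wait test has launched four bdevperf instances in parallel against the same cnode1 subsystem, one per I/O type (write, read, flush, unmap), each pinned to its own core mask (0x10, 0x20, 0x40, 0x80) with a 256 MB memory slice, queue depth 128, and 4096-byte I/O for one second. Each instance receives its controller definition as a JSON document over process substitution (--json /dev/fd/63) rather than a file on disk. A minimal sketch of that document follows: the inner config entry is the one printed by the trace above, while the outer "subsystems"/"bdev" wrapper is assumed to be what gen_nvmf_target_json emits and is not quoted from this log.

# Hypothetical helper reproducing the shape of the per-instance bdevperf config.
gen_bdevperf_json() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}
# One of the four launches above, e.g. the write job on core mask 0x10:
# bdevperf -m 0x10 -i 1 --json <(gen_bdevperf_json) -q 128 -o 4096 -w write -t 1 -s 256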
00:17:12.228 00:17:12.228 Latency(us) 00:17:12.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.228 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:12.228 Nvme1n1 : 1.01 9823.42 38.37 0.00 0.00 12973.87 2427.81 18588.39 00:17:12.228 =================================================================================================================== 00:17:12.228 Total : 9823.42 38.37 0.00 0.00 12973.87 2427.81 18588.39 00:17:12.228 00:17:12.228 Latency(us) 00:17:12.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.228 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:12.228 Nvme1n1 : 1.01 7237.29 28.27 0.00 0.00 17575.03 5779.08 23473.80 00:17:12.228 =================================================================================================================== 00:17:12.228 Total : 7237.29 28.27 0.00 0.00 17575.03 5779.08 23473.80 00:17:12.228 00:17:12.229 Latency(us) 00:17:12.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.229 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:12.229 Nvme1n1 : 1.00 171447.19 669.72 0.00 0.00 743.65 292.31 1377.75 00:17:12.229 =================================================================================================================== 00:17:12.229 Total : 171447.19 669.72 0.00 0.00 743.65 292.31 1377.75 00:17:12.486 00:17:12.486 Latency(us) 00:17:12.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.486 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:12.486 Nvme1n1 : 1.01 8699.80 33.98 0.00 0.00 14653.75 3351.27 21924.77 00:17:12.486 =================================================================================================================== 00:17:12.486 Total : 8699.80 33.98 0.00 0.00 14653.75 3351.27 21924.77 00:17:12.486 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 72494 00:17:12.486 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 72496 00:17:12.486 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 72499 00:17:12.486 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.486 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.745 rmmod nvme_tcp 00:17:12.745 rmmod nvme_fabrics 00:17:12.745 rmmod nvme_keyring 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 72445 ']' 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 72445 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 72445 ']' 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 72445 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72445 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:12.745 killing process with pid 72445 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72445' 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 72445 00:17:12.745 [2024-05-15 02:18:00.610253] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:12.745 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 72445 00:17:13.004 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.004 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.004 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.004 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.004 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.005 00:17:13.005 real 0m3.813s 00:17:13.005 user 0m16.731s 00:17:13.005 sys 0m1.946s 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:13.005 ************************************ 00:17:13.005 END TEST nvmf_bdev_io_wait 00:17:13.005 ************************************ 00:17:13.005 02:18:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.005 02:18:00 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:13.005 02:18:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:13.005 02:18:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:13.005 02:18:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.005 
************************************ 00:17:13.005 START TEST nvmf_queue_depth 00:17:13.005 ************************************ 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:13.005 * Looking for test storage... 00:17:13.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.005 02:18:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.005 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.005 Cannot find device "nvmf_tgt_br" 00:17:13.005 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:17:13.005 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.263 Cannot find device "nvmf_tgt_br2" 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.263 Cannot find device "nvmf_tgt_br" 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.263 Cannot find device "nvmf_tgt_br2" 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:13.263 02:18:01 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.263 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:17:13.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:13.521 00:17:13.521 --- 10.0.0.2 ping statistics --- 00:17:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.521 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:13.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:13.521 00:17:13.521 --- 10.0.0.3 ping statistics --- 00:17:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.521 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:13.521 00:17:13.521 --- 10.0.0.1 ping statistics --- 00:17:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.521 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=72712 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 72712 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 72712 ']' 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
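The queue_depth test has just rebuilt the same veth topology as the bdev_io_wait test above and is now starting the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc) while waitforlisten polls /var/tmp/spdk.sock. Condensed from the ip(8) and iptables commands traced above, and leaving out the teardown half and error handling, the setup is roughly:

# Initiator side stays in the root namespace (10.0.0.1); the two target interfaces
# live inside nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3); the peer ends of all three
# veth pairs are enslaved to one bridge so initiator and target can reach each other,
# and iptables admits NVMe/TCP traffic on port 4420.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1
# from inside the namespace) confirm both directions work before the target starts.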
00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.521 02:18:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.522 [2024-05-15 02:18:01.364302] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:13.522 [2024-05-15 02:18:01.364397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.522 [2024-05-15 02:18:01.500118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.780 [2024-05-15 02:18:01.558466] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.780 [2024-05-15 02:18:01.558518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.780 [2024-05-15 02:18:01.558529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.780 [2024-05-15 02:18:01.558539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.780 [2024-05-15 02:18:01.558546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.780 [2024-05-15 02:18:01.558569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.353 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.353 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:14.353 02:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.353 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.353 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 [2024-05-15 02:18:02.397770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 Malloc0 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.629 02:18:02 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 [2024-05-15 02:18:02.448993] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:14.629 [2024-05-15 02:18:02.449220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=72756 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 72756 /var/tmp/bdevperf.sock 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 72756 ']' 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.629 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.629 [2024-05-15 02:18:02.530725] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:14.629 [2024-05-15 02:18:02.530863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72756 ] 00:17:14.888 [2024-05-15 02:18:02.677570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.888 [2024-05-15 02:18:02.749276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.888 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.888 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:14.888 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:14.888 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.888 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:15.147 NVMe0n1 00:17:15.147 02:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.147 02:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.147 Running I/O for 10 seconds... 00:17:25.109 00:17:25.109 Latency(us) 00:17:25.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:25.109 Verification LBA range: start 0x0 length 0x4000 00:17:25.109 NVMe0n1 : 10.09 8718.37 34.06 0.00 0.00 116946.85 27882.59 81979.58 00:17:25.109 =================================================================================================================== 00:17:25.109 Total : 8718.37 34.06 0.00 0.00 116946.85 27882.59 81979.58 00:17:25.109 0 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 72756 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 72756 ']' 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 72756 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72756 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:25.368 killing process with pid 72756 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72756' 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 72756 00:17:25.368 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.368 00:17:25.368 Latency(us) 00:17:25.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.368 =================================================================================================================== 00:17:25.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@970 -- # wait 72756 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.368 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.626 rmmod nvme_tcp 00:17:25.626 rmmod nvme_fabrics 00:17:25.626 rmmod nvme_keyring 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 72712 ']' 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 72712 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 72712 ']' 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 72712 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72712 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72712' 00:17:25.626 killing process with pid 72712 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 72712 00:17:25.626 [2024-05-15 02:18:13.482641] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:25.626 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 72712 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:25.885 00:17:25.885 real 0m12.840s 00:17:25.885 user 0m22.089s 00:17:25.885 sys 0m1.823s 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.885 02:18:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.885 ************************************ 00:17:25.885 END TEST nvmf_queue_depth 00:17:25.885 ************************************ 00:17:25.885 02:18:13 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.885 02:18:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:25.885 02:18:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:25.885 02:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.885 ************************************ 00:17:25.885 START TEST nvmf_target_multipath 00:17:25.885 ************************************ 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.885 * Looking for test storage... 00:17:25.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:25.885 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.886 02:18:13 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:25.886 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:26.144 Cannot find device "nvmf_tgt_br" 00:17:26.144 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.145 Cannot find device "nvmf_tgt_br2" 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:26.145 Cannot find device "nvmf_tgt_br" 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:26.145 Cannot find device "nvmf_tgt_br2" 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:26.145 02:18:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
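The interface teardown and rebuild above gives the multipath test two TCP paths to the same subsystem: the host-side nvmf_init_if (10.0.0.1) and, inside the nvmf_tgt_ns_spdk namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), all joined through the nvmf_br bridge. Condensed from the commands traced here and the connect calls issued further down (hostnqn/hostid flags omitted), the topology and its use look like:

# One veth pair for the initiator, two for the target; the target ends are
# moved into the namespace and addressed on the same /24.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# The subsystem later listens on both addresses; connecting to each one gives
# the kernel two controllers (nvme0c0n1, nvme0c1n1) under a single namespace,
# whose ANA states the test then flips between optimized, non_optimized and
# inaccessible while fio runs.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G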
00:17:26.145 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:26.403 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:26.403 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:26.403 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:26.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:17:26.404 00:17:26.404 --- 10.0.0.2 ping statistics --- 00:17:26.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.404 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:26.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:26.404 00:17:26.404 --- 10.0.0.3 ping statistics --- 00:17:26.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.404 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:26.404 00:17:26.404 --- 10.0.0.1 ping statistics --- 00:17:26.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.404 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=72998 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 72998 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 72998 ']' 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:26.404 02:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 [2024-05-15 02:18:14.343823] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:26.404 [2024-05-15 02:18:14.343928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.662 [2024-05-15 02:18:14.481756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.662 [2024-05-15 02:18:14.555077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.662 [2024-05-15 02:18:14.555134] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.662 [2024-05-15 02:18:14.555148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.662 [2024-05-15 02:18:14.555158] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.662 [2024-05-15 02:18:14.555167] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.662 [2024-05-15 02:18:14.555259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.662 [2024-05-15 02:18:14.555602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.662 [2024-05-15 02:18:14.555906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.662 [2024-05-15 02:18:14.555927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.596 02:18:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:27.854 [2024-05-15 02:18:15.722872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.854 02:18:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:28.112 Malloc0 00:17:28.112 02:18:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:17:28.371 02:18:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.643 02:18:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.902 [2024-05-15 02:18:16.863887] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:28.902 [2024-05-15 02:18:16.864170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:17:28.902 02:18:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:29.160 [2024-05-15 02:18:17.100325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:29.160 02:18:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:17:29.418 02:18:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:17:29.676 02:18:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.676 02:18:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:17:29.676 02:18:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.676 02:18:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:17:29.676 02:18:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=73111 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:31.633 02:18:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:17:31.633 [global] 00:17:31.633 thread=1 00:17:31.633 invalidate=1 00:17:31.633 rw=randrw 00:17:31.633 time_based=1 00:17:31.633 runtime=6 00:17:31.633 ioengine=libaio 00:17:31.633 direct=1 00:17:31.633 bs=4096 00:17:31.633 iodepth=128 00:17:31.633 norandommap=0 00:17:31.633 numjobs=1 00:17:31.633 00:17:31.633 verify_dump=1 00:17:31.633 verify_backlog=512 00:17:31.633 verify_state_save=0 00:17:31.633 do_verify=1 00:17:31.633 verify=crc32c-intel 00:17:31.633 [job0] 00:17:31.633 filename=/dev/nvme0n1 00:17:31.633 Could not set queue depth (nvme0n1) 00:17:31.892 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:31.892 fio-3.35 00:17:31.892 Starting 1 thread 00:17:32.827 02:18:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:33.085 02:18:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:33.344 02:18:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:34.280 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:34.280 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:34.280 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:34.280 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:34.544 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:34.802 02:18:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:35.736 02:18:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:35.736 02:18:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:35.736 02:18:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:35.736 02:18:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 73111 00:17:38.267 00:17:38.267 job0: (groupid=0, jobs=1): err= 0: pid=73132: Wed May 15 02:18:25 2024 00:17:38.267 read: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(251MiB/6006msec) 00:17:38.267 slat (usec): min=2, max=6507, avg=53.75, stdev=238.53 00:17:38.267 clat (usec): min=799, max=18615, avg=8149.73, stdev=1309.23 00:17:38.267 lat (usec): min=816, max=18628, avg=8203.48, stdev=1320.42 00:17:38.267 clat percentiles (usec): 00:17:38.267 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7373], 00:17:38.267 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:17:38.267 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10552], 00:17:38.267 | 99.00th=[12256], 99.50th=[13304], 99.90th=[15533], 99.95th=[17171], 00:17:38.267 | 99.99th=[18220] 00:17:38.267 bw ( KiB/s): min= 9296, max=29016, per=51.62%, avg=22112.73, stdev=6376.18, samples=11 00:17:38.267 iops : min= 2324, max= 7254, avg=5528.18, stdev=1594.05, samples=11 00:17:38.267 write: IOPS=6418, BW=25.1MiB/s (26.3MB/s)(132MiB/5281msec); 0 zone resets 00:17:38.267 slat (usec): min=4, max=2576, avg=64.60, stdev=158.95 00:17:38.267 clat (usec): min=754, max=18007, avg=7010.02, stdev=1070.93 00:17:38.267 lat (usec): min=812, max=18046, avg=7074.61, stdev=1075.25 00:17:38.267 clat percentiles (usec): 00:17:38.267 | 1.00th=[ 3916], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6390], 00:17:38.267 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177], 00:17:38.267 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 8029], 95.00th=[ 8717], 00:17:38.267 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12518], 99.95th=[13304], 00:17:38.267 | 99.99th=[15270] 00:17:38.267 bw ( KiB/s): min= 9768, max=28336, per=86.49%, avg=22205.64, stdev=6048.47, samples=11 00:17:38.267 iops : min= 2442, max= 7084, avg=5551.36, stdev=1512.09, samples=11 00:17:38.267 lat (usec) : 1000=0.01% 00:17:38.267 lat (msec) : 2=0.01%, 4=0.55%, 10=94.27%, 20=5.17% 00:17:38.267 cpu : usr=5.60%, sys=23.50%, ctx=6308, majf=0, minf=121 00:17:38.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:38.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:38.267 issued rwts: total=64322,33897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:38.267 00:17:38.267 Run status group 0 (all jobs): 00:17:38.267 READ: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=251MiB (263MB), run=6006-6006msec 00:17:38.267 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=132MiB (139MB), run=5281-5281msec 00:17:38.267 00:17:38.267 Disk stats (read/write): 00:17:38.267 nvme0n1: ios=63590/33030, merge=0/0, 
ticks=485250/215813, in_queue=701063, util=98.63% 00:17:38.267 02:18:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:38.267 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:17:38.526 02:18:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=73217 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:39.482 02:18:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:17:39.482 [global] 00:17:39.482 thread=1 00:17:39.482 invalidate=1 00:17:39.482 rw=randrw 00:17:39.482 time_based=1 00:17:39.482 runtime=6 00:17:39.482 ioengine=libaio 00:17:39.482 direct=1 00:17:39.482 bs=4096 00:17:39.482 iodepth=128 00:17:39.482 norandommap=0 00:17:39.482 numjobs=1 00:17:39.482 00:17:39.482 verify_dump=1 00:17:39.482 verify_backlog=512 00:17:39.482 verify_state_save=0 00:17:39.482 do_verify=1 00:17:39.482 verify=crc32c-intel 00:17:39.482 [job0] 00:17:39.482 filename=/dev/nvme0n1 00:17:39.740 Could not set queue depth (nvme0n1) 00:17:39.740 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:39.740 fio-3.35 00:17:39.740 Starting 1 thread 00:17:40.674 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:40.932 02:18:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:42.306 02:18:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:42.306 02:18:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:42.306 02:18:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:42.306 02:18:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:42.306 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:42.564 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:42.565 02:18:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:43.499 02:18:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:43.499 02:18:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:43.499 02:18:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:43.499 02:18:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 73217 00:17:46.036 00:17:46.036 job0: (groupid=0, jobs=1): err= 0: pid=73238: Wed May 15 02:18:33 2024 00:17:46.036 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(278MiB/6004msec) 00:17:46.036 slat (usec): min=4, max=5393, avg=42.89, stdev=202.96 00:17:46.036 clat (usec): min=263, max=19246, avg=7430.14, stdev=1810.55 00:17:46.036 lat (usec): min=274, max=19254, avg=7473.04, stdev=1825.88 00:17:46.036 clat percentiles (usec): 00:17:46.036 | 1.00th=[ 2278], 5.00th=[ 3916], 10.00th=[ 4883], 20.00th=[ 6194], 00:17:46.036 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7898], 00:17:46.036 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[ 9896], 00:17:46.036 | 99.00th=[11731], 99.50th=[12256], 99.90th=[15664], 99.95th=[16712], 00:17:46.036 | 99.99th=[17957] 00:17:46.036 bw ( KiB/s): min= 8584, max=36376, per=53.42%, avg=25339.64, stdev=8790.04, samples=11 00:17:46.036 iops : min= 2146, max= 9094, avg=6334.91, stdev=2197.51, samples=11 00:17:46.036 write: IOPS=7010, BW=27.4MiB/s (28.7MB/s)(147MiB/5365msec); 0 zone resets 00:17:46.036 slat (usec): min=13, max=3725, avg=54.38, stdev=130.45 00:17:46.036 clat (usec): min=241, max=15779, avg=6193.31, stdev=1681.04 00:17:46.036 lat (usec): min=319, max=15799, avg=6247.69, stdev=1691.79 00:17:46.036 clat percentiles (usec): 00:17:46.036 | 1.00th=[ 1909], 5.00th=[ 3032], 10.00th=[ 3621], 20.00th=[ 4621], 00:17:46.036 | 30.00th=[ 5669], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6915], 00:17:46.036 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8094], 00:17:46.036 | 99.00th=[10028], 99.50th=[10552], 99.90th=[12387], 99.95th=[13304], 00:17:46.036 | 99.99th=[15533] 00:17:46.036 bw ( KiB/s): min= 8640, max=36790, per=90.24%, avg=25303.82, stdev=8573.45, samples=11 00:17:46.036 iops : min= 2160, max= 9197, avg=6325.91, stdev=2143.30, samples=11 00:17:46.036 lat (usec) : 250=0.01%, 500=0.03%, 750=0.09%, 1000=0.12% 00:17:46.036 lat (msec) : 2=0.56%, 4=7.46%, 10=88.46%, 20=3.28% 00:17:46.036 cpu : usr=6.93%, sys=26.39%, ctx=7688, majf=0, minf=96 00:17:46.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:46.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.036 issued rwts: total=71204,37609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.036 00:17:46.036 Run status group 0 (all jobs): 00:17:46.036 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=278MiB (292MB), run=6004-6004msec 00:17:46.036 WRITE: bw=27.4MiB/s (28.7MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=147MiB (154MB), run=5365-5365msec 00:17:46.036 00:17:46.036 Disk stats (read/write): 00:17:46.036 nvme0n1: ios=69952/37362, merge=0/0, ticks=481203/210017, in_queue=691220, util=98.62% 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:46.036 02:18:33 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:17:46.036 02:18:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.342 02:18:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:17:46.342 02:18:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:17:46.342 02:18:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:17:46.342 02:18:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.343 rmmod nvme_tcp 00:17:46.343 rmmod nvme_fabrics 00:17:46.343 rmmod nvme_keyring 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 72998 ']' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 72998 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 72998 ']' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 72998 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72998 00:17:46.343 killing process with pid 72998 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72998' 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 72998 00:17:46.343 [2024-05-15 02:18:34.220169] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:46.343 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 72998 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:46.617 ************************************ 00:17:46.617 END TEST nvmf_target_multipath 00:17:46.617 ************************************ 00:17:46.617 00:17:46.617 real 0m20.697s 00:17:46.617 user 1m21.525s 00:17:46.617 sys 0m6.593s 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:46.617 02:18:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:46.617 02:18:34 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:46.618 02:18:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:46.618 02:18:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.618 02:18:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.618 ************************************ 00:17:46.618 START TEST nvmf_zcopy 00:17:46.618 ************************************ 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:46.618 * Looking for test storage... 
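For readability, here is the polling pattern behind the check_ana_state calls that dominate the nvmf_target_multipath pass that just finished above. This is a minimal sketch reconstructed from the xtrace, not the verbatim target/multipath.sh helper, so details such as error reporting on timeout may differ:

    check_ana_state() {
        # Poll /sys/block/<ctrl-path>/ana_state until it reports the expected ANA
        # state, or give up after ~20 one-second retries (matching the trace above).
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }

    # Usage mirroring the trace: flip the listener's ANA state over RPC, then wait
    # for the kernel's multipath view of that controller to catch up.
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    check_ana_state nvme0c0n1 non-optimized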
00:17:46.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.618 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:46.877 Cannot find device "nvmf_tgt_br" 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.877 Cannot find device "nvmf_tgt_br2" 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.877 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.878 Cannot find device "nvmf_tgt_br" 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.878 Cannot find device "nvmf_tgt_br2" 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.878 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:47.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:17:47.136 00:17:47.136 --- 10.0.0.2 ping statistics --- 00:17:47.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.136 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:47.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:47.136 00:17:47.136 --- 10.0.0.3 ping statistics --- 00:17:47.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.136 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:47.136 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:47.137 00:17:47.137 --- 10.0.0.1 ping statistics --- 00:17:47.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.137 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=73474 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 73474 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 73474 ']' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.137 02:18:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.137 [2024-05-15 02:18:35.043697] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:47.137 [2024-05-15 02:18:35.043961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.395 [2024-05-15 02:18:35.179731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.395 [2024-05-15 02:18:35.240123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.395 [2024-05-15 02:18:35.240351] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
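The nvmf_veth_init sequence above builds the virtual network that the rest of this zcopy run depends on. Condensed into a sketch, with the individual commands taken from the xtrace and only grouped here (interface names and the 10.0.0.0/24 addressing are specific to this run):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk bash -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings logged above confirm the result: the initiator reaches both target addresses (10.0.0.2 and 10.0.0.3), and the target namespace reaches the initiator at 10.0.0.1.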
00:17:47.395 [2024-05-15 02:18:35.240600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.395 [2024-05-15 02:18:35.240726] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.395 [2024-05-15 02:18:35.240968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.395 [2024-05-15 02:18:35.241100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 [2024-05-15 02:18:36.106221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 [2024-05-15 02:18:36.126120] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:48.330 [2024-05-15 02:18:36.126367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 malloc0 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:48.330 { 00:17:48.330 "params": { 00:17:48.330 "name": "Nvme$subsystem", 00:17:48.330 "trtype": "$TEST_TRANSPORT", 00:17:48.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.330 "adrfam": "ipv4", 00:17:48.330 "trsvcid": "$NVMF_PORT", 00:17:48.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.330 "hdgst": ${hdgst:-false}, 00:17:48.330 "ddgst": ${ddgst:-false} 00:17:48.330 }, 00:17:48.330 "method": "bdev_nvme_attach_controller" 00:17:48.330 } 00:17:48.330 EOF 00:17:48.330 )") 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:48.330 02:18:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:48.330 "params": { 00:17:48.330 "name": "Nvme1", 00:17:48.330 "trtype": "tcp", 00:17:48.330 "traddr": "10.0.0.2", 00:17:48.330 "adrfam": "ipv4", 00:17:48.330 "trsvcid": "4420", 00:17:48.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.330 "hdgst": false, 00:17:48.330 "ddgst": false 00:17:48.330 }, 00:17:48.330 "method": "bdev_nvme_attach_controller" 00:17:48.330 }' 00:17:48.330 [2024-05-15 02:18:36.208811] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:48.331 [2024-05-15 02:18:36.208927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73519 ] 00:17:48.589 [2024-05-15 02:18:36.349875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.589 [2024-05-15 02:18:36.423005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.589 Running I/O for 10 seconds... 
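The target bring-up for this zcopy pass is compact; restated from the RPC calls in the trace above (rpc_cmd is the harness's RPC helper, equivalent to the direct scripts/rpc.py invocations used in the multipath test earlier; full paths abbreviated):

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport with zero-copy enabled (the option under test)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0            # 32 MB malloc bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Initiator side: a 10-second verify workload at queue depth 128 with 8 KiB I/O,
    # fed the bdev_nvme_attach_controller config printed above via process substitution:
    bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192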
00:18:00.813 00:18:00.813 Latency(us) 00:18:00.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:00.813 Verification LBA range: start 0x0 length 0x1000 00:18:00.813 Nvme1n1 : 10.02 5936.58 46.38 0.00 0.00 21489.13 3530.01 30742.34 00:18:00.813 =================================================================================================================== 00:18:00.813 Total : 5936.58 46.38 0.00 0.00 21489.13 3530.01 30742.34 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=73574 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.813 { 00:18:00.813 "params": { 00:18:00.813 "name": "Nvme$subsystem", 00:18:00.813 "trtype": "$TEST_TRANSPORT", 00:18:00.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.813 "adrfam": "ipv4", 00:18:00.813 "trsvcid": "$NVMF_PORT", 00:18:00.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.813 "hdgst": ${hdgst:-false}, 00:18:00.813 "ddgst": ${ddgst:-false} 00:18:00.813 }, 00:18:00.813 "method": "bdev_nvme_attach_controller" 00:18:00.813 } 00:18:00.813 EOF 00:18:00.813 )") 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:00.813 [2024-05-15 02:18:46.788815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.813 [2024-05-15 02:18:46.789017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
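From this point to the end of the excerpt, the target log is dominated by one repeating triple: spdk_nvmf_subsystem_add_ns_ext rejecting the request because NSID 1 is already in use, nvmf_rpc_ns_paused reporting that the namespace could not be added, and the RPC client logging the resulting Code=-32602 "Invalid parameters" error. The test keeps re-issuing nvmf_subsystem_add_ns for malloc0 with NSID 1 while that NSID is already occupied, so each attempt is refused and the run carries on. A hypothetical two-line reproduction of the same rejection (illustrative only, not the zcopy.sh loop itself):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"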
00:18:00.813 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:00.813 02:18:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:00.813 "params": { 00:18:00.813 "name": "Nvme1", 00:18:00.813 "trtype": "tcp", 00:18:00.813 "traddr": "10.0.0.2", 00:18:00.813 "adrfam": "ipv4", 00:18:00.813 "trsvcid": "4420", 00:18:00.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.813 "hdgst": false, 00:18:00.813 "ddgst": false 00:18:00.813 }, 00:18:00.813 "method": "bdev_nvme_attach_controller" 00:18:00.813 }' 00:18:00.814 [2024-05-15 02:18:46.800796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.800967] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.812788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.812942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.824793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.824828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.836790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.836821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.848149] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:18:00.814 [2024-05-15 02:18:46.848262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73574 ] 00:18:00.814 [2024-05-15 02:18:46.848785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.848817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.860786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.860816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.872809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.872855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.884807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.884840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.896819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.896855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.908811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.908845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 
[2024-05-15 02:18:46.916812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.916846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.924812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.924846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.932814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.932844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.944812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.944842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.956814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.956844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.968815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.968844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.980823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.980855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:46.992818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:46.992846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 [2024-05-15 02:18:46.993580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.814 2024/05/15 02:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:47.004854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:47.004900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:47.016838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:47.016878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:47.028877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:47.028918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:47.040841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:47.040879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.814 [2024-05-15 02:18:47.052838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.814 [2024-05-15 02:18:47.052872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.814 [2024-05-15 02:18:47.052940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.064881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.064924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.076893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.076938] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.088886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.088926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.100896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.100939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.112880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.112927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.120880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.120920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.132894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:00.815 [2024-05-15 02:18:47.132935] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.144895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.144929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.156899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.156934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.164898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.164933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.172889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.172924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.180921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.180957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.188901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.188932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 Running I/O for 5 seconds... 
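The 5-second pass announced here is the random read/write counterpart of the earlier verify run, started in the background (perfpid=73574 in this run), with the nvmf_subsystem_add_ns retries logged around it interleaving with its startup and I/O. Roughly, from the xtrace (the exact job control in zcopy.sh may differ):

    bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    # ... the nvmf_subsystem_add_ns retries seen above and below happen while I/O runs ...
    wait "$perfpid"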
00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.196905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.196935] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.212190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.212232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.226604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.226642] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.241935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.241973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.257314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.257352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:00.815 [2024-05-15 02:18:47.268515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.815 [2024-05-15 02:18:47.268554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.815 2024/05/15 02:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:18:01.340 [2024-05-15 02:18:49.214931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:01.340 [2024-05-15 02:18:49.214984] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.230955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.231008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.247943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.248004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.264148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.264205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.280970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.281041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.297643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.297693] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.313259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.313298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.328925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.328963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.340 [2024-05-15 02:18:49.339376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.340 [2024-05-15 02:18:49.339427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.340 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.354170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.354211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.369959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.370002] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.379998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.380040] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.394483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.394548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.407979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.408019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:01.599 [2024-05-15 02:18:49.424400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.424442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.440906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.440946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.458016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.458054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.474908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.474948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.490173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.490212] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.505819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.505857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.516238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.516277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.531279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.531317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.548016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.548054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.563876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.563913] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.580720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.580759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.599 [2024-05-15 02:18:49.596916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.599 [2024-05-15 02:18:49.596954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.599 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.615770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.615812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.630256] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.630318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.647358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.647414] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.662807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.662846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.673401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.673436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.684077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.684116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.694842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.694880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.705815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.705855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.723748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.723802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.738725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.738764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.755762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.755801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.771052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.771091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.781265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.781303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.797254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.797307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.859 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.859 [2024-05-15 02:18:49.813521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:01.859 [2024-05-15 02:18:49.813563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.860 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.860 [2024-05-15 02:18:49.824042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.860 [2024-05-15 02:18:49.824083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.860 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.860 [2024-05-15 02:18:49.839947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.860 [2024-05-15 02:18:49.839994] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.860 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.860 [2024-05-15 02:18:49.857041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.860 [2024-05-15 02:18:49.857115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.860 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:01.860 [2024-05-15 02:18:49.872819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.860 [2024-05-15 02:18:49.872860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.860 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.118 [2024-05-15 02:18:49.883512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.883549] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.894272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.894312] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.910093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.910140] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.926340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.926381] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.942067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.942114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.957817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.957858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.968191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.968229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.983245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:49.983284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:49.992601] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:02.119 [2024-05-15 02:18:49.992639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.008802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.008851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.025652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.025699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.041585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.041630] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.060221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.060269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.070674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.070712] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.081449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.081486] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.099299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.099338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.115251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.115290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.119 [2024-05-15 02:18:50.125367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.119 [2024-05-15 02:18:50.125414] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.119 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.140091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.140129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.155276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.155311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.165077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.165117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.181111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.181156] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.198262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.198297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.213958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.214004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.230888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.231058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.241170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.241210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.252169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.252211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.264965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.265007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.282781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.282824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.297523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.297565] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.314603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.314641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.329513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.329552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.379 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.379 [2024-05-15 02:18:50.347188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.379 [2024-05-15 02:18:50.347227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.380 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.380 [2024-05-15 02:18:50.361871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.380 [2024-05-15 02:18:50.361911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.380 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.380 [2024-05-15 02:18:50.378869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.380 [2024-05-15 02:18:50.378928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:02.380 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.393843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.393885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.411668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.411708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.427066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.427107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.444250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.444295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.460725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.460777] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.478193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.478236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:02.639 [2024-05-15 02:18:50.493527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.493568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.505231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.505282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.520151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.520193] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.529802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.529844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.544702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.544753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.557145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.557186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.639 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.639 [2024-05-15 02:18:50.574077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.639 [2024-05-15 02:18:50.574117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.589003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.589059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.604479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.604530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.614511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.614548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.629217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.629255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.639842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.639882] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.640 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.640 [2024-05-15 02:18:50.651178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.640 [2024-05-15 02:18:50.651215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.667176] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.667221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.677423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.677460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.688970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.689008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.704692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.704731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.721433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.721483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.736580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.736618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.753403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.753440] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.763782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.763819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.778489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.778527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.795170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.795222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.812110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.812150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.827339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.827402] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.837254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.837309] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.851814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.851869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.861754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.861793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.876234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.876274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.886773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.886810] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.901232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.901269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.900 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:02.900 [2024-05-15 02:18:50.911477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.900 [2024-05-15 02:18:50.911514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.901 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.161 [2024-05-15 02:18:50.925859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.925896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:50.936288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.936326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:50.950705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.950746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:50.967662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.967703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:50.982922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.982963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:50.998626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:50.998665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.009271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.009308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.024350] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
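What the loop above keeps replaying is a single rejected RPC: nqn.2016-06.io.spdk:cnode1 already exposes malloc0 as NSID 1, so every repeated nvmf_subsystem_add_ns for that NSID is refused with JSON-RPC error -32602 (Invalid parameters) while the target logs "Requested NSID 1 already in use". A minimal sketch of the same exchange, assuming a running SPDK target on the default /var/tmp/spdk.sock and the repo's scripts/rpc.py (the same RPC names and arguments the test drives through its rpc_cmd wrapper later in this log):
# Re-adding a bdev under an NSID that is already occupied is rejected; the
# target prints "Requested NSID 1 already in use" and the caller gets back
# Code=-32602 Msg=Invalid parameters, exactly as repeated throughout this log.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Freeing NSID 1 first lets the same call succeed.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1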
00:18:03.162 [2024-05-15 02:18:51.024429] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.039921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.039969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.052318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.052356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.069830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.069868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.085107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.085145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.101847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.101885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.118453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.118490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.135165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.135203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.152315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.152362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.162 [2024-05-15 02:18:51.167977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.162 [2024-05-15 02:18:51.168019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.162 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.183720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.183774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.199964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.200003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.210370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.210423] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.225565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.225603] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.241375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.241425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.256487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.256525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.421 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.421 [2024-05-15 02:18:51.272248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.421 [2024-05-15 02:18:51.272283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.289876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.289915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.305716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.305756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.322457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.322490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.337986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.338020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.348220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.348254] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.359960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.359993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.370934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.370969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.381992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.382031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.393731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.393764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.410526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.410564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.422 [2024-05-15 02:18:51.427985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.422 [2024-05-15 02:18:51.428021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.422 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.438578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.438610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.452773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.452807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.471638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.471673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.485964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.485997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.502574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.502608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:03.681 [2024-05-15 02:18:51.520695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.520732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.536101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.536136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.681 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.681 [2024-05-15 02:18:51.553893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.681 [2024-05-15 02:18:51.553929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.569520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.569554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.586507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.586543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.601579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.601615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.617822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.617857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.635156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.635193] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.651000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.651034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.667595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.667629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.682 [2024-05-15 02:18:51.683150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.682 [2024-05-15 02:18:51.683185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.682 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.699858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.699893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.716474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.716510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.733070] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.733104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.748672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.748708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.766482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.766516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.782193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.782230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.798873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.798909] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.816063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.816100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.830766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.830800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.847655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.847690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.863411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.863444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.879217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.879252] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.896769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.941 [2024-05-15 02:18:51.896806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.941 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.941 [2024-05-15 02:18:51.912206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.942 [2024-05-15 02:18:51.912240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.942 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.942 [2024-05-15 02:18:51.928509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.942 [2024-05-15 02:18:51.928542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.942 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.942 [2024-05-15 02:18:51.938288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:03.942 [2024-05-15 02:18:51.938320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.942 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:03.942 [2024-05-15 02:18:51.952818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.942 [2024-05-15 02:18:51.952851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:51.968763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:51.968798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:51.979110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:51.979147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:51.994166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:51.994202] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.011860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.011895] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.027138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.027172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.036579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.036612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.051966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.052000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.068723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.068757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.085350] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.085396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.201 [2024-05-15 02:18:52.095566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.201 [2024-05-15 02:18:52.095600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.201 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.110639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.110673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.127497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
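These last rejections arrive while the I/O job is already finishing; its Latency(us) summary a little further below reports what the Nvme1n1 job sustained over its roughly 5 s run. As a quick consistency check on those figures (assuming the 8192-byte I/O size stated on the job line), the IOPS and MiB/s columns agree:
# 11624.84 IOPS x 8192 B per I/O comes to about 90.82 MiB/s,
# matching the MiB/s column in the summary below.
awk 'BEGIN { printf "%.2f MiB/s\n", 11624.84 * 8192 / (1024 * 1024) }'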
00:18:04.202 [2024-05-15 02:18:52.127530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.142967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.143002] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.158396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.158428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.173768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.173801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.184095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.184128] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.194900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.194933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.203020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.203052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:04.202 
00:18:04.202                                                                                      Latency(us)
00:18:04.202 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:04.202 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:04.202 Nvme1n1                     :       5.01   11624.84      90.82       0.00     0.00    10996.43    4944.99    26095.24
00:18:04.202 ===================================================================================================================
00:18:04.202 Total                       :              11624.84      90.82       0.00     0.00    10996.43    4944.99    26095.24
00:18:04.202 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.202 [2024-05-15 02:18:52.215022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.202 [2024-05-15 02:18:52.215054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.223002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.223031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.235054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.235096] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.247069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.247112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.259065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.259108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.271060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.271102] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*:
Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.283056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.283092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.295043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.295071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.307049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.307079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.319073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.319115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.331064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.331097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.343058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.343089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.355104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.355142] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.367162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.367229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.379083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.379119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 [2024-05-15 02:18:52.391081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.461 [2024-05-15 02:18:52.391116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.461 2024/05/15 02:18:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.461 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (73574) - No such process 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 73574 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.461 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:04.461 delay0 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.462 02:18:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:04.720 [2024-05-15 02:18:52.579573] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:11.280 Initializing NVMe Controllers 00:18:11.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.280 Initialization complete. Launching workers. 00:18:11.280 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 58 00:18:11.280 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 345, failed to submit 33 00:18:11.280 success 159, unsuccess 186, failed 0 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.280 rmmod nvme_tcp 00:18:11.280 rmmod nvme_fabrics 00:18:11.280 rmmod nvme_keyring 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 73474 ']' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 73474 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 73474 ']' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 73474 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73474 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:11.280 killing process with pid 73474 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73474' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 73474 00:18:11.280 [2024-05-15 02:18:58.732337] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 73474 00:18:11.280 
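The tail of the zcopy suite above swaps the plain malloc namespace for a delay bdev so the abort example has slow, long-lived commands to cancel; the success/unsuccess counts it reports above come from that run. A minimal sketch of the same sequence as standalone rpc.py calls, reusing only the paths and arguments already traced in this log (the script itself drives them through the rpc_cmd wrapper; the 1000000 values are microseconds of injected latency, roughly one second per I/O):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # drop the plain malloc0 namespace
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1  # re-expose it behind the delay bdev as NSID 1
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'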
02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:11.280 00:18:11.280 real 0m24.449s 00:18:11.280 user 0m39.975s 00:18:11.280 sys 0m6.170s 00:18:11.280 ************************************ 00:18:11.280 END TEST nvmf_zcopy 00:18:11.280 ************************************ 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:11.280 02:18:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.280 02:18:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:11.280 02:18:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:11.280 02:18:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.280 02:18:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.280 ************************************ 00:18:11.280 START TEST nvmf_nmic 00:18:11.280 ************************************ 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:11.280 * Looking for test storage... 
00:18:11.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.280 02:18:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
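nvmf_veth_init, traced next, builds the all-virtual network used by NET_TYPE=virt runs: one initiator veth left on the host, target veths moved into a private network namespace, and a single bridge joining them. A condensed sketch of that topology, using only the names and addresses visible in the trace below (the real helper also creates the second target interface for 10.0.0.3, brings every link up, and handles stale-state cleanup, as the following lines show):

    ip netns add nvmf_tgt_ns_spdk                                  # the target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in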
00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:11.281 Cannot find device "nvmf_tgt_br" 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.281 Cannot find device "nvmf_tgt_br2" 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:11.281 Cannot find device "nvmf_tgt_br" 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:11.281 Cannot find device "nvmf_tgt_br2" 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.281 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:11.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:11.539 00:18:11.539 --- 10.0.0.2 ping statistics --- 00:18:11.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.539 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:11.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:11.539 00:18:11.539 --- 10.0.0.3 ping statistics --- 00:18:11.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.539 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:11.539 00:18:11.539 --- 10.0.0.1 ping statistics --- 00:18:11.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.539 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=73822 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 73822 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 73822 ']' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.539 02:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.539 [2024-05-15 02:18:59.550489] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:11.539 [2024-05-15 02:18:59.550590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.798 [2024-05-15 02:18:59.691330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.798 [2024-05-15 02:18:59.767150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.798 [2024-05-15 02:18:59.767210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.798 [2024-05-15 02:18:59.767223] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.798 [2024-05-15 02:18:59.767233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.798 [2024-05-15 02:18:59.767243] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.798 [2024-05-15 02:18:59.767431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.798 [2024-05-15 02:18:59.767947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.798 [2024-05-15 02:18:59.768044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.798 [2024-05-15 02:18:59.768048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 [2024-05-15 02:19:00.505813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 Malloc0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 [2024-05-15 02:19:00.571129] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:12.734 [2024-05-15 02:19:00.571548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 test case1: single bdev can't be used in multiple subsystems 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 [2024-05-15 02:19:00.595153] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:12.734 [2024-05-15 02:19:00.595196] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:12.734 [2024-05-15 02:19:00.595209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.734 2024/05/15 02:19:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:12.734 request: 00:18:12.734 { 00:18:12.734 "method": "nvmf_subsystem_add_ns", 00:18:12.734 "params": { 00:18:12.734 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:12.734 "namespace": { 00:18:12.734 "bdev_name": "Malloc0", 00:18:12.734 "no_auto_visible": false 00:18:12.734 } 00:18:12.734 } 00:18:12.734 } 00:18:12.734 Got JSON-RPC error response 00:18:12.734 GoRPCClient: error on JSON-RPC call 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:12.734 Adding namespace failed - expected result. 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
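Test case 1 passes precisely because the RPC fails: the first nvmf_subsystem_add_ns claims Malloc0 with an exclusive_write bdev claim, so the second subsystem cannot open the same bdev and the call is rejected with code -32602, which the script records as the expected result. A minimal sketch of the same check as direct rpc.py calls, reusing only the names and arguments traced above (the test itself goes through the rpc_cmd wrapper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first add succeeds and claims the bdev
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed, JSON-RPC -32602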
00:18:12.734 test case2: host connect to nvmf target in multiple paths 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 [2024-05-15 02:19:00.607320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:12.994 02:19:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:15.525 02:19:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:15.525 [global] 00:18:15.525 thread=1 00:18:15.525 invalidate=1 00:18:15.525 rw=write 00:18:15.525 time_based=1 00:18:15.525 runtime=1 00:18:15.525 ioengine=libaio 00:18:15.525 direct=1 00:18:15.525 bs=4096 00:18:15.525 iodepth=1 00:18:15.525 norandommap=0 00:18:15.525 numjobs=1 00:18:15.525 00:18:15.525 verify_dump=1 00:18:15.525 verify_backlog=512 00:18:15.525 verify_state_save=0 00:18:15.525 do_verify=1 00:18:15.525 verify=crc32c-intel 00:18:15.525 [job0] 00:18:15.525 filename=/dev/nvme0n1 00:18:15.525 Could not set queue depth (nvme0n1) 00:18:15.525 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.525 fio-3.35 00:18:15.525 Starting 1 thread 00:18:16.460 00:18:16.460 job0: (groupid=0, jobs=1): err= 0: pid=73908: Wed May 15 02:19:04 2024 00:18:16.460 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:18:16.460 slat (nsec): min=13070, max=43865, avg=16074.70, stdev=2981.13 00:18:16.460 clat (usec): 
min=138, max=275, avg=157.92, stdev=12.60 00:18:16.460 lat (usec): min=152, max=290, avg=173.99, stdev=12.98 00:18:16.460 clat percentiles (usec): 00:18:16.460 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:18:16.460 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:18:16.460 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 178], 00:18:16.460 | 99.00th=[ 210], 99.50th=[ 227], 99.90th=[ 243], 99.95th=[ 255], 00:18:16.460 | 99.99th=[ 277] 00:18:16.460 write: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:18:16.460 slat (usec): min=19, max=172, avg=24.03, stdev= 5.89 00:18:16.460 clat (usec): min=95, max=289, avg=113.78, stdev=12.78 00:18:16.460 lat (usec): min=115, max=421, avg=137.81, stdev=15.23 00:18:16.460 clat percentiles (usec): 00:18:16.460 | 1.00th=[ 99], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 105], 00:18:16.460 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 113], 00:18:16.460 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 139], 00:18:16.460 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 249], 00:18:16.460 | 99.99th=[ 289] 00:18:16.460 bw ( KiB/s): min=12488, max=12488, per=94.82%, avg=12488.00, stdev= 0.00, samples=1 00:18:16.460 iops : min= 3122, max= 3122, avg=3122.00, stdev= 0.00, samples=1 00:18:16.460 lat (usec) : 100=1.68%, 250=98.26%, 500=0.06% 00:18:16.460 cpu : usr=2.20%, sys=9.80%, ctx=6368, majf=0, minf=2 00:18:16.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.460 issued rwts: total=3072,3296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.460 00:18:16.460 Run status group 0 (all jobs): 00:18:16.460 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:18:16.460 WRITE: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:18:16.460 00:18:16.460 Disk stats (read/write): 00:18:16.460 nvme0n1: ios=2706/3072, merge=0/0, ticks=451/387, in_queue=838, util=91.07% 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:16.460 02:19:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
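nvmftestfini, which began just above, is the common teardown every suite in this run ends with. A condensed sketch of the order it follows here, mirroring only the steps visible in the surrounding trace (helper internals such as _remove_spdk_ns are not expanded, and its output is redirected in the trace):

    sync                              # flush outstanding I/O before unloading modules
    modprobe -v -r nvme-tcp           # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from this
    modprobe -v -r nvme-fabrics
    kill 73822                        # killprocess $nvmfpid: stop the nvmf_tgt started for this suite
    _remove_spdk_ns                   # tear down the nvmf_tgt_ns_spdk namespace
    ip -4 addr flush nvmf_init_if     # clear the initiator address so the next suite starts clean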
00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.461 rmmod nvme_tcp 00:18:16.461 rmmod nvme_fabrics 00:18:16.461 rmmod nvme_keyring 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 73822 ']' 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 73822 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 73822 ']' 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 73822 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73822 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:16.461 killing process with pid 73822 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73822' 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 73822 00:18:16.461 [2024-05-15 02:19:04.449743] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:16.461 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 73822 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.721 00:18:16.721 real 0m5.677s 00:18:16.721 user 0m19.013s 00:18:16.721 sys 0m1.322s 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:16.721 ************************************ 00:18:16.721 END TEST nvmf_nmic 00:18:16.721 ************************************ 00:18:16.721 02:19:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.721 02:19:04 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:16.721 
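For reference, the short write-plus-verify pass that nmic.sh drove through fio-wrapper above corresponds roughly to one plain fio command assembled from the job file it printed. This is a sketch: /dev/nvme0n1 is the namespace the nvme connect calls exposed in this run, and a few job options (invalidate, norandommap, verify_state_save) are omitted for brevity:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

The READ/WRITE bandwidth summary and latency percentiles printed above are fio's standard end-of-run report for that single job.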
02:19:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:16.721 02:19:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:16.721 02:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.981 ************************************ 00:18:16.981 START TEST nvmf_fio_target 00:18:16.981 ************************************ 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:16.981 * Looking for test storage... 00:18:16.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.981 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:16.982 Cannot find device "nvmf_tgt_br" 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.982 Cannot find device "nvmf_tgt_br2" 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:18:16.982 Cannot find device "nvmf_tgt_br" 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:16.982 Cannot find device "nvmf_tgt_br2" 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.982 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.241 02:19:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:17.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:17.241 00:18:17.241 --- 10.0.0.2 ping statistics --- 00:18:17.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.241 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:17.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:18:17.241 00:18:17.241 --- 10.0.0.3 ping statistics --- 00:18:17.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.241 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:17.241 00:18:17.241 --- 10.0.0.1 ping statistics --- 00:18:17.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.241 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.241 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=74073 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 74073 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 74073 ']' 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.242 02:19:05 
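Note on the trace above: the "Cannot find device" and "Cannot open network namespace" messages are the expected output of the cleanup pass that runs before the topology is (re)created. What nvmf_veth_init then builds is a small veth/bridge setup: the initiator keeps 10.0.0.1 on nvmf_init_if in the default namespace, the target side gets 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2) inside the nvmf_tgt_ns_spdk namespace, and the peer interfaces are enslaved to the nvmf_br bridge before nvmf_tgt is started inside that namespace. The commands below are a condensed sketch reconstructed from the log, not the common.sh code itself; the cleanup path, retries and the waitforlisten polling are omitted.

# Condensed sketch of the nvmf_veth_init topology traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks mirror the log: host -> namespace, then namespace -> host.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# The target application is then launched inside the namespace, as traced above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &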
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:17.242 02:19:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.242 [2024-05-15 02:19:05.250519] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:17.242 [2024-05-15 02:19:05.251030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.500 [2024-05-15 02:19:05.390583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.500 [2024-05-15 02:19:05.460915] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.500 [2024-05-15 02:19:05.460990] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.500 [2024-05-15 02:19:05.461006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.500 [2024-05-15 02:19:05.461017] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.500 [2024-05-15 02:19:05.461026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.500 [2024-05-15 02:19:05.461154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.500 [2024-05-15 02:19:05.461776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.500 [2024-05-15 02:19:05.461859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.500 [2024-05-15 02:19:05.461867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.434 02:19:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.434 [2024-05-15 02:19:06.440766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.693 02:19:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.951 02:19:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:18.951 02:19:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:19.210 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:19.210 02:19:07 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:19.469 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:19.469 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:19.727 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:19.727 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:19.985 02:19:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.244 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:20.244 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.513 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:20.513 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.799 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:20.799 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:21.063 02:19:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:21.326 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:21.326 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.585 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:21.585 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.843 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.101 [2024-05-15 02:19:09.904650] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:22.101 [2024-05-15 02:19:09.904933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.101 02:19:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:22.359 02:19:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:22.618 02:19:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
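The RPC sequence traced at fio.sh lines 19 through 46 provisions the target end to end: a TCP transport, seven 64 MiB malloc bdevs with a 512-byte block size, a raid0 bdev striped over Malloc2/Malloc3, a concat bdev over Malloc4 through Malloc6, one subsystem (nqn.2016-06.io.spdk:cnode1, serial SPDKISFASTANDAWESOME) with four namespaces and a listener on 10.0.0.2:4420, after which the initiator connects with nvme-cli. The sketch below condenses those calls; the malloc creation loop and the omission of the --hostnqn/--hostid flags on nvme connect are simplifications, and the MallocN names assume the bdevs come back in creation order, as they do in this run.

# Condensed provisioning sketch matching the rpc.py calls traced above
# (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per fio.sh@11-12).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192   # transport options as used in the log

# Seven malloc bdevs; in this run they are named Malloc0..Malloc6.
for _ in $(seq 0 6); do $rpc_py bdev_malloc_create 64 512; done

$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# Subsystem allowing any host (-a), with the serial number the test greps for later.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Initiator side: connect over TCP; the four namespaces show up as /dev/nvme0n1..n4
# (the --hostnqn/--hostid UUIDs from the log are elided in this sketch).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The waitforserial call that follows in the trace simply polls lsblk -l -o NAME,SERIAL and greps for SPDKISFASTANDAWESOME until all four namespaces are visible.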
waitforserial SPDKISFASTANDAWESOME 4 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:22.877 02:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:24.791 02:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:24.791 [global] 00:18:24.791 thread=1 00:18:24.791 invalidate=1 00:18:24.791 rw=write 00:18:24.791 time_based=1 00:18:24.791 runtime=1 00:18:24.791 ioengine=libaio 00:18:24.791 direct=1 00:18:24.791 bs=4096 00:18:24.791 iodepth=1 00:18:24.791 norandommap=0 00:18:24.791 numjobs=1 00:18:24.791 00:18:24.791 verify_dump=1 00:18:24.791 verify_backlog=512 00:18:24.791 verify_state_save=0 00:18:24.791 do_verify=1 00:18:24.791 verify=crc32c-intel 00:18:24.791 [job0] 00:18:24.791 filename=/dev/nvme0n1 00:18:24.791 [job1] 00:18:24.791 filename=/dev/nvme0n2 00:18:24.791 [job2] 00:18:24.791 filename=/dev/nvme0n3 00:18:24.791 [job3] 00:18:24.791 filename=/dev/nvme0n4 00:18:24.791 Could not set queue depth (nvme0n1) 00:18:24.791 Could not set queue depth (nvme0n2) 00:18:24.791 Could not set queue depth (nvme0n3) 00:18:24.791 Could not set queue depth (nvme0n4) 00:18:25.050 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:25.050 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:25.050 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:25.050 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:25.050 fio-3.35 00:18:25.050 Starting 4 threads 00:18:26.445 00:18:26.445 job0: (groupid=0, jobs=1): err= 0: pid=74325: Wed May 15 02:19:14 2024 00:18:26.445 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:26.445 slat (nsec): min=13029, max=49938, avg=17326.32, stdev=3475.16 00:18:26.445 clat (usec): min=153, max=1362, avg=242.51, stdev=39.62 00:18:26.445 lat (usec): min=169, max=1378, avg=259.84, stdev=40.11 00:18:26.445 clat percentiles (usec): 00:18:26.445 | 1.00th=[ 163], 5.00th=[ 186], 10.00th=[ 219], 20.00th=[ 225], 00:18:26.445 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:18:26.445 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 285], 00:18:26.445 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 478], 99.95th=[ 586], 00:18:26.445 | 99.99th=[ 1369] 00:18:26.445 write: IOPS=2204, 
BW=8819KiB/s (9031kB/s)(8828KiB/1001msec); 0 zone resets 00:18:26.445 slat (usec): min=19, max=133, avg=26.55, stdev= 6.60 00:18:26.445 clat (usec): min=97, max=504, avg=181.61, stdev=41.51 00:18:26.445 lat (usec): min=118, max=540, avg=208.16, stdev=41.92 00:18:26.445 clat percentiles (usec): 00:18:26.445 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 127], 00:18:26.445 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:18:26.445 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 227], 00:18:26.445 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 392], 99.95th=[ 433], 00:18:26.445 | 99.99th=[ 506] 00:18:26.445 bw ( KiB/s): min= 8416, max= 8416, per=25.25%, avg=8416.00, stdev= 0.00, samples=1 00:18:26.445 iops : min= 2104, max= 2104, avg=2104.00, stdev= 0.00, samples=1 00:18:26.445 lat (usec) : 100=0.26%, 250=83.29%, 500=16.38%, 750=0.05% 00:18:26.445 lat (msec) : 2=0.02% 00:18:26.445 cpu : usr=1.60%, sys=7.10%, ctx=4256, majf=0, minf=9 00:18:26.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.445 issued rwts: total=2048,2207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.445 job1: (groupid=0, jobs=1): err= 0: pid=74326: Wed May 15 02:19:14 2024 00:18:26.445 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:26.445 slat (nsec): min=14392, max=63882, avg=24468.53, stdev=9832.20 00:18:26.445 clat (usec): min=211, max=2201, avg=300.36, stdev=73.08 00:18:26.445 lat (usec): min=228, max=2222, avg=324.83, stdev=75.36 00:18:26.445 clat percentiles (usec): 00:18:26.445 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:18:26.445 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 322], 00:18:26.445 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 392], 00:18:26.445 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 807], 99.95th=[ 2212], 00:18:26.445 | 99.99th=[ 2212] 00:18:26.445 write: IOPS=2035, BW=8144KiB/s (8339kB/s)(8152KiB/1001msec); 0 zone resets 00:18:26.445 slat (usec): min=19, max=107, avg=29.12, stdev= 9.06 00:18:26.445 clat (usec): min=104, max=658, avg=212.51, stdev=29.01 00:18:26.445 lat (usec): min=129, max=683, avg=241.63, stdev=28.70 00:18:26.445 clat percentiles (usec): 00:18:26.445 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 196], 00:18:26.445 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:18:26.445 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:18:26.445 | 99.00th=[ 302], 99.50th=[ 343], 99.90th=[ 553], 99.95th=[ 578], 00:18:26.445 | 99.99th=[ 660] 00:18:26.445 bw ( KiB/s): min= 8192, max= 8192, per=24.58%, avg=8192.00, stdev= 0.00, samples=1 00:18:26.445 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:26.445 lat (usec) : 250=63.04%, 500=36.74%, 750=0.17%, 1000=0.03% 00:18:26.445 lat (msec) : 4=0.03% 00:18:26.445 cpu : usr=1.80%, sys=7.20%, ctx=3574, majf=0, minf=7 00:18:26.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.445 issued rwts: total=1536,2038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.445 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:18:26.445 job2: (groupid=0, jobs=1): err= 0: pid=74327: Wed May 15 02:19:14 2024 00:18:26.445 read: IOPS=1574, BW=6298KiB/s (6449kB/s)(6304KiB/1001msec) 00:18:26.445 slat (nsec): min=14179, max=70144, avg=21183.13, stdev=5268.30 00:18:26.445 clat (usec): min=156, max=2394, avg=285.34, stdev=56.11 00:18:26.445 lat (usec): min=182, max=2418, avg=306.52, stdev=56.14 00:18:26.446 clat percentiles (usec): 00:18:26.446 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:18:26.446 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:18:26.446 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:18:26.446 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 2409], 00:18:26.446 | 99.99th=[ 2409] 00:18:26.446 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:26.446 slat (nsec): min=19959, max=71920, avg=30550.82, stdev=6220.60 00:18:26.446 clat (usec): min=128, max=403, avg=217.74, stdev=22.68 00:18:26.446 lat (usec): min=171, max=431, avg=248.30, stdev=22.23 00:18:26.446 clat percentiles (usec): 00:18:26.446 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:18:26.446 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:18:26.446 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 249], 00:18:26.446 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 400], 00:18:26.446 | 99.99th=[ 404] 00:18:26.446 bw ( KiB/s): min= 8192, max= 8192, per=24.58%, avg=8192.00, stdev= 0.00, samples=1 00:18:26.446 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:26.446 lat (usec) : 250=54.19%, 500=45.78% 00:18:26.446 lat (msec) : 4=0.03% 00:18:26.446 cpu : usr=1.30%, sys=7.70%, ctx=3638, majf=0, minf=12 00:18:26.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.446 issued rwts: total=1576,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.446 job3: (groupid=0, jobs=1): err= 0: pid=74328: Wed May 15 02:19:14 2024 00:18:26.446 read: IOPS=1587, BW=6350KiB/s (6502kB/s)(6356KiB/1001msec) 00:18:26.446 slat (nsec): min=14985, max=90147, avg=23841.26, stdev=5958.68 00:18:26.446 clat (usec): min=153, max=1416, avg=280.51, stdev=38.97 00:18:26.446 lat (usec): min=183, max=1436, avg=304.35, stdev=38.51 00:18:26.446 clat percentiles (usec): 00:18:26.446 | 1.00th=[ 217], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:18:26.446 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 277], 60.00th=[ 285], 00:18:26.446 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:18:26.446 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 873], 99.95th=[ 1418], 00:18:26.446 | 99.99th=[ 1418] 00:18:26.446 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:26.446 slat (usec): min=21, max=143, avg=32.82, stdev= 8.17 00:18:26.446 clat (usec): min=120, max=494, avg=214.97, stdev=22.38 00:18:26.446 lat (usec): min=163, max=523, avg=247.79, stdev=22.32 00:18:26.446 clat percentiles (usec): 00:18:26.446 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:18:26.446 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:18:26.446 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 247], 00:18:26.446 | 99.00th=[ 310], 99.50th=[ 330], 
99.90th=[ 355], 99.95th=[ 375], 00:18:26.446 | 99.99th=[ 494] 00:18:26.446 bw ( KiB/s): min= 8192, max= 8192, per=24.58%, avg=8192.00, stdev= 0.00, samples=1 00:18:26.446 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:26.446 lat (usec) : 250=55.13%, 500=44.79%, 750=0.03%, 1000=0.03% 00:18:26.446 lat (msec) : 2=0.03% 00:18:26.446 cpu : usr=2.20%, sys=7.60%, ctx=3637, majf=0, minf=7 00:18:26.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.446 issued rwts: total=1589,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.446 00:18:26.446 Run status group 0 (all jobs): 00:18:26.446 READ: bw=26.3MiB/s (27.6MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.4MiB (27.6MB), run=1001-1001msec 00:18:26.446 WRITE: bw=32.5MiB/s (34.1MB/s), 8144KiB/s-8819KiB/s (8339kB/s-9031kB/s), io=32.6MiB (34.2MB), run=1001-1001msec 00:18:26.446 00:18:26.446 Disk stats (read/write): 00:18:26.446 nvme0n1: ios=1673/2048, merge=0/0, ticks=415/398, in_queue=813, util=86.61% 00:18:26.446 nvme0n2: ios=1460/1536, merge=0/0, ticks=453/340, in_queue=793, util=87.44% 00:18:26.446 nvme0n3: ios=1571/1536, merge=0/0, ticks=530/360, in_queue=890, util=91.76% 00:18:26.446 nvme0n4: ios=1583/1536, merge=0/0, ticks=527/339, in_queue=866, util=91.72% 00:18:26.446 02:19:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:26.446 [global] 00:18:26.446 thread=1 00:18:26.446 invalidate=1 00:18:26.446 rw=randwrite 00:18:26.446 time_based=1 00:18:26.446 runtime=1 00:18:26.446 ioengine=libaio 00:18:26.446 direct=1 00:18:26.446 bs=4096 00:18:26.446 iodepth=1 00:18:26.446 norandommap=0 00:18:26.446 numjobs=1 00:18:26.446 00:18:26.446 verify_dump=1 00:18:26.446 verify_backlog=512 00:18:26.446 verify_state_save=0 00:18:26.446 do_verify=1 00:18:26.446 verify=crc32c-intel 00:18:26.446 [job0] 00:18:26.446 filename=/dev/nvme0n1 00:18:26.446 [job1] 00:18:26.446 filename=/dev/nvme0n2 00:18:26.446 [job2] 00:18:26.446 filename=/dev/nvme0n3 00:18:26.446 [job3] 00:18:26.446 filename=/dev/nvme0n4 00:18:26.446 Could not set queue depth (nvme0n1) 00:18:26.446 Could not set queue depth (nvme0n2) 00:18:26.446 Could not set queue depth (nvme0n3) 00:18:26.446 Could not set queue depth (nvme0n4) 00:18:26.446 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.446 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.446 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.446 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.446 fio-3.35 00:18:26.446 Starting 4 threads 00:18:27.819 00:18:27.819 job0: (groupid=0, jobs=1): err= 0: pid=74375: Wed May 15 02:19:15 2024 00:18:27.819 read: IOPS=2929, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:18:27.819 slat (nsec): min=12554, max=39616, avg=16736.22, stdev=3996.82 00:18:27.819 clat (usec): min=134, max=2051, avg=163.58, stdev=42.78 00:18:27.819 lat (usec): min=148, max=2067, avg=180.32, stdev=43.16 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 141], 
5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:18:27.819 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:18:27.819 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 184], 00:18:27.819 | 99.00th=[ 237], 99.50th=[ 293], 99.90th=[ 529], 99.95th=[ 996], 00:18:27.819 | 99.99th=[ 2057] 00:18:27.819 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:18:27.819 slat (usec): min=18, max=119, avg=25.64, stdev= 6.75 00:18:27.819 clat (usec): min=93, max=296, avg=124.10, stdev=13.39 00:18:27.819 lat (usec): min=114, max=399, avg=149.74, stdev=17.10 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 115], 00:18:27.819 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:18:27.819 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:18:27.819 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 231], 99.95th=[ 281], 00:18:27.819 | 99.99th=[ 297] 00:18:27.819 bw ( KiB/s): min=12288, max=12288, per=30.24%, avg=12288.00, stdev= 0.00, samples=1 00:18:27.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:27.819 lat (usec) : 100=0.53%, 250=99.05%, 500=0.37%, 750=0.02%, 1000=0.02% 00:18:27.819 lat (msec) : 4=0.02% 00:18:27.819 cpu : usr=2.50%, sys=9.40%, ctx=6004, majf=0, minf=13 00:18:27.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 issued rwts: total=2932,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.819 job1: (groupid=0, jobs=1): err= 0: pid=74376: Wed May 15 02:19:15 2024 00:18:27.819 read: IOPS=2040, BW=8164KiB/s (8360kB/s)(8172KiB/1001msec) 00:18:27.819 slat (nsec): min=10672, max=65650, avg=17665.23, stdev=5559.66 00:18:27.819 clat (usec): min=137, max=41474, avg=276.98, stdev=917.35 00:18:27.819 lat (usec): min=151, max=41489, avg=294.65, stdev=917.46 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:18:27.819 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 249], 60.00th=[ 277], 00:18:27.819 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 396], 95.00th=[ 433], 00:18:27.819 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 750], 99.95th=[ 1188], 00:18:27.819 | 99.99th=[41681] 00:18:27.819 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:27.819 slat (usec): min=11, max=190, avg=25.55, stdev= 9.46 00:18:27.819 clat (usec): min=103, max=491, avg=164.64, stdev=66.75 00:18:27.819 lat (usec): min=123, max=510, avg=190.19, stdev=68.66 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 124], 00:18:27.819 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 139], 00:18:27.819 | 70.00th=[ 151], 80.00th=[ 202], 90.00th=[ 277], 95.00th=[ 326], 00:18:27.819 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 429], 00:18:27.819 | 99.99th=[ 494] 00:18:27.819 bw ( KiB/s): min=12288, max=12288, per=30.24%, avg=12288.00, stdev= 0.00, samples=1 00:18:27.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:27.819 lat (usec) : 250=68.52%, 500=30.90%, 750=0.54% 00:18:27.819 lat (msec) : 2=0.02%, 50=0.02% 00:18:27.819 cpu : usr=2.00%, sys=6.60%, ctx=4099, majf=0, minf=11 00:18:27.819 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 issued rwts: total=2043,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.819 job2: (groupid=0, jobs=1): err= 0: pid=74377: Wed May 15 02:19:15 2024 00:18:27.819 read: IOPS=1755, BW=7021KiB/s (7189kB/s)(7028KiB/1001msec) 00:18:27.819 slat (nsec): min=8757, max=60758, avg=19067.17, stdev=7206.73 00:18:27.819 clat (usec): min=153, max=41525, avg=301.83, stdev=1010.48 00:18:27.819 lat (usec): min=169, max=41534, avg=320.90, stdev=1010.44 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:18:27.819 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 227], 60.00th=[ 277], 00:18:27.819 | 70.00th=[ 318], 80.00th=[ 359], 90.00th=[ 433], 95.00th=[ 494], 00:18:27.819 | 99.00th=[ 578], 99.50th=[ 652], 99.90th=[ 7373], 99.95th=[41681], 00:18:27.819 | 99.99th=[41681] 00:18:27.819 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:27.819 slat (usec): min=11, max=104, avg=26.20, stdev=10.55 00:18:27.819 clat (usec): min=119, max=971, avg=182.78, stdev=57.46 00:18:27.819 lat (usec): min=141, max=989, avg=208.99, stdev=57.39 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:18:27.819 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 169], 00:18:27.819 | 70.00th=[ 198], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 289], 00:18:27.819 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 529], 00:18:27.819 | 99.99th=[ 971] 00:18:27.819 bw ( KiB/s): min=11064, max=11064, per=27.23%, avg=11064.00, stdev= 0.00, samples=1 00:18:27.819 iops : min= 2766, max= 2766, avg=2766.00, stdev= 0.00, samples=1 00:18:27.819 lat (usec) : 250=69.96%, 500=27.99%, 750=1.84%, 1000=0.03% 00:18:27.819 lat (msec) : 2=0.05%, 4=0.08%, 10=0.03%, 50=0.03% 00:18:27.819 cpu : usr=2.10%, sys=6.40%, ctx=3809, majf=0, minf=12 00:18:27.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 issued rwts: total=1757,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.819 job3: (groupid=0, jobs=1): err= 0: pid=74378: Wed May 15 02:19:15 2024 00:18:27.819 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:18:27.819 slat (nsec): min=13202, max=50572, avg=16862.92, stdev=4779.54 00:18:27.819 clat (usec): min=151, max=296, avg=178.09, stdev=17.78 00:18:27.819 lat (usec): min=166, max=311, avg=194.96, stdev=18.94 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:18:27.819 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:18:27.819 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 208], 00:18:27.819 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 277], 00:18:27.819 | 99.99th=[ 297] 00:18:27.819 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:18:27.819 slat (usec): min=19, max=121, avg=23.70, stdev= 5.83 00:18:27.819 clat (usec): min=110, max=587, avg=139.91, 
stdev=16.11 00:18:27.819 lat (usec): min=131, max=607, avg=163.62, stdev=17.85 00:18:27.819 clat percentiles (usec): 00:18:27.819 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:18:27.819 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:18:27.819 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:18:27.819 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 251], 00:18:27.819 | 99.99th=[ 586] 00:18:27.819 bw ( KiB/s): min=12288, max=12288, per=30.24%, avg=12288.00, stdev= 0.00, samples=1 00:18:27.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:27.819 lat (usec) : 250=99.28%, 500=0.70%, 750=0.02% 00:18:27.819 cpu : usr=1.40%, sys=9.20%, ctx=5560, majf=0, minf=9 00:18:27.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.819 issued rwts: total=2560,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.819 00:18:27.819 Run status group 0 (all jobs): 00:18:27.820 READ: bw=36.3MiB/s (38.0MB/s), 7021KiB/s-11.4MiB/s (7189kB/s-12.0MB/s), io=36.3MiB (38.1MB), run=1001-1001msec 00:18:27.820 WRITE: bw=39.7MiB/s (41.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.7MiB (41.6MB), run=1001-1001msec 00:18:27.820 00:18:27.820 Disk stats (read/write): 00:18:27.820 nvme0n1: ios=2610/2564, merge=0/0, ticks=463/340, in_queue=803, util=87.68% 00:18:27.820 nvme0n2: ios=1719/2048, merge=0/0, ticks=500/350, in_queue=850, util=89.30% 00:18:27.820 nvme0n3: ios=1553/1803, merge=0/0, ticks=481/335, in_queue=816, util=87.96% 00:18:27.820 nvme0n4: ios=2211/2560, merge=0/0, ticks=404/397, in_queue=801, util=89.80% 00:18:27.820 02:19:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:27.820 [global] 00:18:27.820 thread=1 00:18:27.820 invalidate=1 00:18:27.820 rw=write 00:18:27.820 time_based=1 00:18:27.820 runtime=1 00:18:27.820 ioengine=libaio 00:18:27.820 direct=1 00:18:27.820 bs=4096 00:18:27.820 iodepth=128 00:18:27.820 norandommap=0 00:18:27.820 numjobs=1 00:18:27.820 00:18:27.820 verify_dump=1 00:18:27.820 verify_backlog=512 00:18:27.820 verify_state_save=0 00:18:27.820 do_verify=1 00:18:27.820 verify=crc32c-intel 00:18:27.820 [job0] 00:18:27.820 filename=/dev/nvme0n1 00:18:27.820 [job1] 00:18:27.820 filename=/dev/nvme0n2 00:18:27.820 [job2] 00:18:27.820 filename=/dev/nvme0n3 00:18:27.820 [job3] 00:18:27.820 filename=/dev/nvme0n4 00:18:27.820 Could not set queue depth (nvme0n1) 00:18:27.820 Could not set queue depth (nvme0n2) 00:18:27.820 Could not set queue depth (nvme0n3) 00:18:27.820 Could not set queue depth (nvme0n4) 00:18:27.820 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.820 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.820 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.820 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.820 fio-3.35 00:18:27.820 Starting 4 threads 00:18:28.754 00:18:28.754 job0: (groupid=0, jobs=1): err= 0: pid=74426: Wed May 15 02:19:16 2024 00:18:28.754 read: 
IOPS=5543, BW=21.7MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:18:28.754 slat (usec): min=6, max=5142, avg=90.22, stdev=412.63 00:18:28.754 clat (usec): min=1233, max=17959, avg=11473.21, stdev=1618.82 00:18:28.754 lat (usec): min=3222, max=17979, avg=11563.43, stdev=1650.33 00:18:28.754 clat percentiles (usec): 00:18:28.754 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10814], 00:18:28.754 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:18:28.754 | 70.00th=[11731], 80.00th=[12649], 90.00th=[13435], 95.00th=[14353], 00:18:28.755 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[17957], 00:18:28.755 | 99.99th=[17957] 00:18:28.755 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:18:28.755 slat (usec): min=9, max=4522, avg=80.91, stdev=305.30 00:18:28.755 clat (usec): min=6859, max=16317, avg=11203.54, stdev=1312.39 00:18:28.755 lat (usec): min=6888, max=16637, avg=11284.44, stdev=1331.29 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10421], 00:18:28.755 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:18:28.755 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12256], 95.00th=[13829], 00:18:28.755 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16319], 99.95th=[16319], 00:18:28.755 | 99.99th=[16319] 00:18:28.755 bw ( KiB/s): min=21576, max=23527, per=34.15%, avg=22551.50, stdev=1379.57, samples=2 00:18:28.755 iops : min= 5394, max= 5881, avg=5637.50, stdev=344.36, samples=2 00:18:28.755 lat (msec) : 2=0.01%, 4=0.26%, 10=11.89%, 20=87.84% 00:18:28.755 cpu : usr=3.89%, sys=16.77%, ctx=800, majf=0, minf=7 00:18:28.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:28.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.755 issued rwts: total=5560,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.755 job1: (groupid=0, jobs=1): err= 0: pid=74427: Wed May 15 02:19:16 2024 00:18:28.755 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:18:28.755 slat (usec): min=6, max=5211, avg=92.50, stdev=433.86 00:18:28.755 clat (usec): min=9328, max=16279, avg=12298.43, stdev=865.44 00:18:28.755 lat (usec): min=9590, max=17196, avg=12390.93, stdev=778.36 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[11207], 20.00th=[11994], 00:18:28.755 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:18:28.755 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13304], 00:18:28.755 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16319], 99.95th=[16319], 00:18:28.755 | 99.99th=[16319] 00:18:28.755 write: IOPS=5449, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1002msec); 0 zone resets 00:18:28.755 slat (usec): min=10, max=2802, avg=88.86, stdev=351.70 00:18:28.755 clat (usec): min=1871, max=14873, avg=11661.38, stdev=1441.51 00:18:28.755 lat (usec): min=1889, max=14893, avg=11750.24, stdev=1440.06 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[ 5407], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:18:28.755 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:18:28.755 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13304], 95.00th=[13566], 00:18:28.755 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14877], 99.95th=[14877], 00:18:28.755 | 
99.99th=[14877] 00:18:28.755 bw ( KiB/s): min=20601, max=22104, per=32.33%, avg=21352.50, stdev=1062.78, samples=2 00:18:28.755 iops : min= 5150, max= 5526, avg=5338.00, stdev=265.87, samples=2 00:18:28.755 lat (msec) : 2=0.07%, 4=0.20%, 10=4.65%, 20=95.09% 00:18:28.755 cpu : usr=4.90%, sys=14.19%, ctx=585, majf=0, minf=14 00:18:28.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:28.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.755 issued rwts: total=5120,5460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.755 job2: (groupid=0, jobs=1): err= 0: pid=74428: Wed May 15 02:19:16 2024 00:18:28.755 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:18:28.755 slat (usec): min=5, max=9153, avg=209.20, stdev=1000.92 00:18:28.755 clat (usec): min=17115, max=41716, avg=27092.57, stdev=4190.62 00:18:28.755 lat (usec): min=19723, max=44250, avg=27301.77, stdev=4152.98 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[19792], 5.00th=[22152], 10.00th=[22414], 20.00th=[23200], 00:18:28.755 | 30.00th=[24773], 40.00th=[25560], 50.00th=[25822], 60.00th=[27395], 00:18:28.755 | 70.00th=[28705], 80.00th=[30278], 90.00th=[33817], 95.00th=[34866], 00:18:28.755 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:28.755 | 99.99th=[41681] 00:18:28.755 write: IOPS=2525, BW=9.87MiB/s (10.3MB/s)(9.91MiB/1004msec); 0 zone resets 00:18:28.755 slat (usec): min=13, max=6093, avg=218.83, stdev=820.34 00:18:28.755 clat (usec): min=2733, max=40466, avg=28032.65, stdev=5057.77 00:18:28.755 lat (usec): min=6523, max=40494, avg=28251.48, stdev=5023.62 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[11863], 5.00th=[21890], 10.00th=[24511], 20.00th=[24773], 00:18:28.755 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[28181], 00:18:28.755 | 70.00th=[29492], 80.00th=[32375], 90.00th=[35914], 95.00th=[37487], 00:18:28.755 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:18:28.755 | 99.99th=[40633] 00:18:28.755 bw ( KiB/s): min= 9352, max= 9920, per=14.59%, avg=9636.00, stdev=401.64, samples=2 00:18:28.755 iops : min= 2338, max= 2480, avg=2409.00, stdev=100.41, samples=2 00:18:28.755 lat (msec) : 4=0.02%, 10=0.35%, 20=2.01%, 50=97.62% 00:18:28.755 cpu : usr=1.79%, sys=6.48%, ctx=317, majf=0, minf=7 00:18:28.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:28.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.755 issued rwts: total=2048,2536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.755 job3: (groupid=0, jobs=1): err= 0: pid=74429: Wed May 15 02:19:16 2024 00:18:28.755 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:18:28.755 slat (usec): min=6, max=7826, avg=184.46, stdev=917.43 00:18:28.755 clat (usec): min=14518, max=30615, avg=23545.56, stdev=2784.24 00:18:28.755 lat (usec): min=15761, max=31815, avg=23730.02, stdev=2687.87 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[17433], 5.00th=[18744], 10.00th=[19792], 20.00th=[21103], 00:18:28.755 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23462], 60.00th=[24249], 00:18:28.755 | 70.00th=[25560], 80.00th=[26084], 
90.00th=[27132], 95.00th=[27657], 00:18:28.755 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30540], 99.95th=[30540], 00:18:28.755 | 99.99th=[30540] 00:18:28.755 write: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec); 0 zone resets 00:18:28.755 slat (usec): min=13, max=5925, avg=170.36, stdev=726.42 00:18:28.755 clat (usec): min=2827, max=34084, avg=22357.95, stdev=4666.57 00:18:28.755 lat (usec): min=2863, max=34118, avg=22528.31, stdev=4638.77 00:18:28.755 clat percentiles (usec): 00:18:28.755 | 1.00th=[ 7570], 5.00th=[16581], 10.00th=[17957], 20.00th=[18482], 00:18:28.755 | 30.00th=[18744], 40.00th=[19792], 50.00th=[22152], 60.00th=[24511], 00:18:28.755 | 70.00th=[25035], 80.00th=[26084], 90.00th=[28967], 95.00th=[29754], 00:18:28.755 | 99.00th=[31589], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:18:28.755 | 99.99th=[34341] 00:18:28.755 bw ( KiB/s): min=10272, max=12312, per=17.10%, avg=11292.00, stdev=1442.50, samples=2 00:18:28.755 iops : min= 2568, max= 3078, avg=2823.00, stdev=360.62, samples=2 00:18:28.755 lat (msec) : 4=0.05%, 10=0.58%, 20=26.84%, 50=72.53% 00:18:28.755 cpu : usr=2.79%, sys=9.38%, ctx=238, majf=0, minf=11 00:18:28.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:28.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.755 issued rwts: total=2560,2947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.755 00:18:28.755 Run status group 0 (all jobs): 00:18:28.755 READ: bw=59.5MiB/s (62.4MB/s), 8159KiB/s-21.7MiB/s (8355kB/s-22.7MB/s), io=59.7MiB (62.6MB), run=1002-1004msec 00:18:28.755 WRITE: bw=64.5MiB/s (67.6MB/s), 9.87MiB/s-21.9MiB/s (10.3MB/s-23.0MB/s), io=64.7MiB (67.9MB), run=1002-1004msec 00:18:28.755 00:18:28.755 Disk stats (read/write): 00:18:28.755 nvme0n1: ios=4658/5119, merge=0/0, ticks=25305/25117, in_queue=50422, util=88.68% 00:18:28.755 nvme0n2: ios=4591/4608, merge=0/0, ticks=12913/11835, in_queue=24748, util=93.94% 00:18:28.755 nvme0n3: ios=2068/2048, merge=0/0, ticks=13326/13549, in_queue=26875, util=93.75% 00:18:28.755 nvme0n4: ios=2169/2560, merge=0/0, ticks=12806/13164, in_queue=25970, util=92.89% 00:18:28.755 02:19:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:29.014 [global] 00:18:29.014 thread=1 00:18:29.014 invalidate=1 00:18:29.014 rw=randwrite 00:18:29.014 time_based=1 00:18:29.014 runtime=1 00:18:29.014 ioengine=libaio 00:18:29.014 direct=1 00:18:29.014 bs=4096 00:18:29.014 iodepth=128 00:18:29.014 norandommap=0 00:18:29.014 numjobs=1 00:18:29.014 00:18:29.014 verify_dump=1 00:18:29.014 verify_backlog=512 00:18:29.014 verify_state_save=0 00:18:29.014 do_verify=1 00:18:29.014 verify=crc32c-intel 00:18:29.014 [job0] 00:18:29.014 filename=/dev/nvme0n1 00:18:29.014 [job1] 00:18:29.014 filename=/dev/nvme0n2 00:18:29.014 [job2] 00:18:29.014 filename=/dev/nvme0n3 00:18:29.014 [job3] 00:18:29.014 filename=/dev/nvme0n4 00:18:29.014 Could not set queue depth (nvme0n1) 00:18:29.014 Could not set queue depth (nvme0n2) 00:18:29.014 Could not set queue depth (nvme0n3) 00:18:29.014 Could not set queue depth (nvme0n4) 00:18:29.014 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.014 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.014 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.014 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.014 fio-3.35 00:18:29.014 Starting 4 threads 00:18:30.389 00:18:30.389 job0: (groupid=0, jobs=1): err= 0: pid=74476: Wed May 15 02:19:18 2024 00:18:30.389 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:18:30.389 slat (usec): min=4, max=8236, avg=189.17, stdev=907.26 00:18:30.389 clat (usec): min=15697, max=39515, avg=23891.29, stdev=3469.98 00:18:30.389 lat (usec): min=18727, max=39533, avg=24080.45, stdev=3426.91 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[17695], 5.00th=[20317], 10.00th=[20841], 20.00th=[21103], 00:18:30.389 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:18:30.389 | 70.00th=[24773], 80.00th=[25822], 90.00th=[28705], 95.00th=[30802], 00:18:30.389 | 99.00th=[35390], 99.50th=[36439], 99.90th=[39060], 99.95th=[39584], 00:18:30.389 | 99.99th=[39584] 00:18:30.389 write: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1004msec); 0 zone resets 00:18:30.389 slat (usec): min=12, max=5331, avg=180.42, stdev=632.03 00:18:30.389 clat (usec): min=1696, max=41661, avg=23857.16, stdev=5940.83 00:18:30.389 lat (usec): min=3863, max=41685, avg=24037.58, stdev=5943.78 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[ 8717], 5.00th=[15926], 10.00th=[16581], 20.00th=[19268], 00:18:30.389 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22676], 60.00th=[25035], 00:18:30.389 | 70.00th=[26608], 80.00th=[28443], 90.00th=[32113], 95.00th=[33162], 00:18:30.389 | 99.00th=[38536], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:18:30.389 | 99.99th=[41681] 00:18:30.389 bw ( KiB/s): min= 8576, max=12312, per=16.34%, avg=10444.00, stdev=2641.75, samples=2 00:18:30.389 iops : min= 2144, max= 3078, avg=2611.00, stdev=660.44, samples=2 00:18:30.389 lat (msec) : 2=0.02%, 4=0.09%, 10=0.66%, 20=13.14%, 50=86.08% 00:18:30.389 cpu : usr=2.29%, sys=8.67%, ctx=367, majf=0, minf=11 00:18:30.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.389 issued rwts: total=2560,2736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.389 job1: (groupid=0, jobs=1): err= 0: pid=74477: Wed May 15 02:19:18 2024 00:18:30.389 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:18:30.389 slat (usec): min=3, max=5423, avg=162.61, stdev=750.38 00:18:30.389 clat (usec): min=3838, max=29706, avg=20669.51, stdev=3039.14 00:18:30.389 lat (usec): min=3852, max=30404, avg=20832.13, stdev=2989.43 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[ 8717], 5.00th=[16450], 10.00th=[17171], 20.00th=[19006], 00:18:30.389 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21890], 00:18:30.389 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23725], 95.00th=[24773], 00:18:30.389 | 99.00th=[26084], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:18:30.389 | 99.99th=[29754] 00:18:30.389 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1004msec); 0 zone resets 00:18:30.389 slat (usec): min=9, max=6999, avg=153.53, stdev=609.75 00:18:30.389 clat (usec): min=3526, max=34307, avg=20510.87, 
stdev=4907.85 00:18:30.389 lat (usec): min=3552, max=34353, avg=20664.41, stdev=4914.25 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[12518], 5.00th=[15008], 10.00th=[15664], 20.00th=[16319], 00:18:30.389 | 30.00th=[16712], 40.00th=[17695], 50.00th=[19792], 60.00th=[21890], 00:18:30.389 | 70.00th=[22414], 80.00th=[23987], 90.00th=[26870], 95.00th=[31589], 00:18:30.389 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:18:30.389 | 99.99th=[34341] 00:18:30.389 bw ( KiB/s): min=12288, max=12288, per=19.22%, avg=12288.00, stdev= 0.00, samples=2 00:18:30.389 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:30.389 lat (msec) : 4=0.26%, 10=0.65%, 20=43.59%, 50=55.50% 00:18:30.389 cpu : usr=2.69%, sys=9.87%, ctx=337, majf=0, minf=15 00:18:30.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.389 issued rwts: total=3072,3081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.389 job2: (groupid=0, jobs=1): err= 0: pid=74482: Wed May 15 02:19:18 2024 00:18:30.389 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:18:30.389 slat (usec): min=7, max=4014, avg=101.75, stdev=508.65 00:18:30.389 clat (usec): min=9874, max=17491, avg=13261.70, stdev=1102.94 00:18:30.389 lat (usec): min=9899, max=17965, avg=13363.45, stdev=1149.09 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11731], 20.00th=[12780], 00:18:30.389 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:18:30.389 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[15008], 00:18:30.389 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:18:30.389 | 99.99th=[17433] 00:18:30.389 write: IOPS=5103, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:18:30.389 slat (usec): min=10, max=3832, avg=96.02, stdev=403.78 00:18:30.389 clat (usec): min=405, max=17296, avg=12783.94, stdev=1595.57 00:18:30.389 lat (usec): min=3294, max=17320, avg=12879.96, stdev=1577.32 00:18:30.389 clat percentiles (usec): 00:18:30.389 | 1.00th=[ 8225], 5.00th=[10028], 10.00th=[10421], 20.00th=[12125], 00:18:30.389 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:18:30.389 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14484], 00:18:30.389 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16712], 99.95th=[16712], 00:18:30.389 | 99.99th=[17171] 00:18:30.389 bw ( KiB/s): min=20480, max=20480, per=32.04%, avg=20480.00, stdev= 0.00, samples=1 00:18:30.389 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:18:30.389 lat (usec) : 500=0.01% 00:18:30.389 lat (msec) : 4=0.20%, 10=2.20%, 20=97.59% 00:18:30.389 cpu : usr=3.40%, sys=15.80%, ctx=488, majf=0, minf=17 00:18:30.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:30.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.389 issued rwts: total=4608,5109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.389 job3: (groupid=0, jobs=1): err= 0: pid=74483: Wed May 15 02:19:18 2024 00:18:30.390 read: IOPS=4710, BW=18.4MiB/s 
(19.3MB/s)(18.4MiB/1002msec) 00:18:30.390 slat (usec): min=8, max=3254, avg=98.89, stdev=445.43 00:18:30.390 clat (usec): min=819, max=15951, avg=13023.52, stdev=1307.09 00:18:30.390 lat (usec): min=2211, max=18637, avg=13122.41, stdev=1256.85 00:18:30.390 clat percentiles (usec): 00:18:30.390 | 1.00th=[ 5473], 5.00th=[11207], 10.00th=[11863], 20.00th=[12649], 00:18:30.390 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:18:30.390 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13960], 95.00th=[14222], 00:18:30.390 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15533], 99.95th=[15795], 00:18:30.390 | 99.99th=[15926] 00:18:30.390 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:18:30.390 slat (usec): min=9, max=3447, avg=97.08, stdev=408.09 00:18:30.390 clat (usec): min=9746, max=15816, avg=12694.39, stdev=1191.47 00:18:30.390 lat (usec): min=10063, max=15838, avg=12791.47, stdev=1174.55 00:18:30.390 clat percentiles (usec): 00:18:30.390 | 1.00th=[10290], 5.00th=[10683], 10.00th=[11207], 20.00th=[11469], 00:18:30.390 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:18:30.390 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14222], 95.00th=[14484], 00:18:30.390 | 99.00th=[15139], 99.50th=[15533], 99.90th=[15795], 99.95th=[15795], 00:18:30.390 | 99.99th=[15795] 00:18:30.390 bw ( KiB/s): min=20352, max=20521, per=31.97%, avg=20436.50, stdev=119.50, samples=2 00:18:30.390 iops : min= 5088, max= 5130, avg=5109.00, stdev=29.70, samples=2 00:18:30.390 lat (usec) : 1000=0.01% 00:18:30.390 lat (msec) : 4=0.30%, 10=0.56%, 20=99.13% 00:18:30.390 cpu : usr=4.10%, sys=13.09%, ctx=511, majf=0, minf=7 00:18:30.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:30.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.390 issued rwts: total=4720,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.390 00:18:30.390 Run status group 0 (all jobs): 00:18:30.390 READ: bw=58.2MiB/s (61.0MB/s), 9.96MiB/s-18.4MiB/s (10.4MB/s-19.3MB/s), io=58.4MiB (61.3MB), run=1001-1004msec 00:18:30.390 WRITE: bw=62.4MiB/s (65.5MB/s), 10.6MiB/s-20.0MiB/s (11.2MB/s-20.9MB/s), io=62.7MiB (65.7MB), run=1001-1004msec 00:18:30.390 00:18:30.390 Disk stats (read/write): 00:18:30.390 nvme0n1: ios=2098/2471, merge=0/0, ticks=11371/13914, in_queue=25285, util=86.75% 00:18:30.390 nvme0n2: ios=2564/2560, merge=0/0, ticks=12715/12340, in_queue=25055, util=87.68% 00:18:30.390 nvme0n3: ios=4051/4096, merge=0/0, ticks=16501/15396, in_queue=31897, util=88.46% 00:18:30.390 nvme0n4: ios=4096/4150, merge=0/0, ticks=12520/11554, in_queue=24074, util=89.50% 00:18:30.390 02:19:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:30.390 02:19:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=74491 00:18:30.390 02:19:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:30.390 02:19:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:30.390 [global] 00:18:30.390 thread=1 00:18:30.390 invalidate=1 00:18:30.390 rw=read 00:18:30.390 time_based=1 00:18:30.390 runtime=10 00:18:30.390 ioengine=libaio 00:18:30.390 direct=1 00:18:30.390 bs=4096 00:18:30.390 iodepth=1 00:18:30.390 norandommap=1 00:18:30.390 numjobs=1 00:18:30.390 00:18:30.390 [job0] 
00:18:30.390 filename=/dev/nvme0n1 00:18:30.390 [job1] 00:18:30.390 filename=/dev/nvme0n2 00:18:30.390 [job2] 00:18:30.390 filename=/dev/nvme0n3 00:18:30.390 [job3] 00:18:30.390 filename=/dev/nvme0n4 00:18:30.390 Could not set queue depth (nvme0n1) 00:18:30.390 Could not set queue depth (nvme0n2) 00:18:30.390 Could not set queue depth (nvme0n3) 00:18:30.390 Could not set queue depth (nvme0n4) 00:18:30.390 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:30.390 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:30.390 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:30.390 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:30.390 fio-3.35 00:18:30.390 Starting 4 threads 00:18:33.699 02:19:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:33.699 fio: pid=74535, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:33.699 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=60309504, buflen=4096 00:18:33.699 02:19:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:33.699 fio: pid=74534, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:33.699 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27586560, buflen=4096 00:18:33.699 02:19:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:33.699 02:19:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:34.265 fio: pid=74532, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:34.265 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=42373120, buflen=4096 00:18:34.265 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.265 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:34.265 fio: pid=74533, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:34.265 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=65974272, buflen=4096 00:18:34.524 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.524 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:34.524 00:18:34.524 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74532: Wed May 15 02:19:22 2024 00:18:34.524 read: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(40.4MiB/3516msec) 00:18:34.524 slat (usec): min=8, max=14450, avg=20.38, stdev=239.23 00:18:34.524 clat (usec): min=130, max=4303, avg=317.87, stdev=162.49 00:18:34.524 lat (usec): min=143, max=14773, avg=338.25, stdev=289.80 00:18:34.524 clat percentiles (usec): 00:18:34.524 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:18:34.524 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 326], 60.00th=[ 441], 00:18:34.524 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 490], 95.00th=[ 
506], 00:18:34.524 | 99.00th=[ 570], 99.50th=[ 676], 99.90th=[ 775], 99.95th=[ 881], 00:18:34.524 | 99.99th=[ 3589] 00:18:34.524 bw ( KiB/s): min= 7920, max=22112, per=23.50%, avg=11919.83, stdev=6148.88, samples=6 00:18:34.524 iops : min= 1980, max= 5528, avg=2979.83, stdev=1537.31, samples=6 00:18:34.524 lat (usec) : 250=46.20%, 500=47.68%, 750=5.95%, 1000=0.12% 00:18:34.524 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01% 00:18:34.524 cpu : usr=0.88%, sys=4.21%, ctx=10355, majf=0, minf=1 00:18:34.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 issued rwts: total=10346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.524 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74533: Wed May 15 02:19:22 2024 00:18:34.524 read: IOPS=4263, BW=16.7MiB/s (17.5MB/s)(62.9MiB/3778msec) 00:18:34.524 slat (usec): min=12, max=11775, avg=19.53, stdev=180.85 00:18:34.524 clat (usec): min=124, max=3408, avg=213.46, stdev=71.39 00:18:34.524 lat (usec): min=138, max=11963, avg=232.99, stdev=195.04 00:18:34.524 clat percentiles (usec): 00:18:34.524 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 155], 00:18:34.524 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 184], 60.00th=[ 239], 00:18:34.524 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:18:34.524 | 99.00th=[ 343], 99.50th=[ 412], 99.90th=[ 502], 99.95th=[ 709], 00:18:34.524 | 99.99th=[ 1516] 00:18:34.524 bw ( KiB/s): min=15512, max=18385, per=32.84%, avg=16657.29, stdev=1008.72, samples=7 00:18:34.524 iops : min= 3878, max= 4596, avg=4164.29, stdev=252.11, samples=7 00:18:34.524 lat (usec) : 250=64.19%, 500=35.70%, 750=0.06%, 1000=0.01% 00:18:34.524 lat (msec) : 2=0.03%, 4=0.01% 00:18:34.524 cpu : usr=1.16%, sys=5.64%, ctx=16118, majf=0, minf=1 00:18:34.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 issued rwts: total=16108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.524 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74534: Wed May 15 02:19:22 2024 00:18:34.524 read: IOPS=2095, BW=8382KiB/s (8583kB/s)(26.3MiB/3214msec) 00:18:34.524 slat (usec): min=9, max=9711, avg=28.09, stdev=155.43 00:18:34.524 clat (usec): min=168, max=2074, avg=446.24, stdev=55.28 00:18:34.524 lat (usec): min=217, max=10010, avg=474.32, stdev=162.30 00:18:34.524 clat percentiles (usec): 00:18:34.524 | 1.00th=[ 247], 5.00th=[ 383], 10.00th=[ 404], 20.00th=[ 420], 00:18:34.524 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:18:34.524 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 502], 00:18:34.524 | 99.00th=[ 635], 99.50th=[ 693], 99.90th=[ 766], 99.95th=[ 848], 00:18:34.524 | 99.99th=[ 2073] 00:18:34.524 bw ( KiB/s): min= 7912, max= 8864, per=16.57%, avg=8403.17, stdev=369.09, samples=6 00:18:34.524 iops : min= 1978, max= 2216, avg=2100.67, stdev=92.15, samples=6 00:18:34.524 lat (usec) : 250=1.19%, 500=93.79%, 750=4.84%, 1000=0.13% 00:18:34.524 lat (msec) : 2=0.01%, 
4=0.01% 00:18:34.524 cpu : usr=1.46%, sys=4.51%, ctx=6748, majf=0, minf=1 00:18:34.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.524 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74535: Wed May 15 02:19:22 2024 00:18:34.524 read: IOPS=4966, BW=19.4MiB/s (20.3MB/s)(57.5MiB/2965msec) 00:18:34.524 slat (usec): min=12, max=103, avg=18.22, stdev= 5.56 00:18:34.524 clat (usec): min=147, max=7561, avg=181.45, stdev=113.76 00:18:34.524 lat (usec): min=161, max=7587, avg=199.67, stdev=114.45 00:18:34.524 clat percentiles (usec): 00:18:34.524 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:18:34.524 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:18:34.524 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 223], 00:18:34.524 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 412], 99.95th=[ 1074], 00:18:34.524 | 99.99th=[ 7439] 00:18:34.524 bw ( KiB/s): min=17832, max=21056, per=38.89%, avg=19728.00, stdev=1325.97, samples=5 00:18:34.524 iops : min= 4458, max= 5264, avg=4932.00, stdev=331.49, samples=5 00:18:34.524 lat (usec) : 250=98.46%, 500=1.46%, 750=0.01%, 1000=0.01% 00:18:34.524 lat (msec) : 2=0.02%, 4=0.01%, 10=0.02% 00:18:34.524 cpu : usr=1.42%, sys=7.59%, ctx=14727, majf=0, minf=1 00:18:34.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.524 issued rwts: total=14725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.524 00:18:34.524 Run status group 0 (all jobs): 00:18:34.524 READ: bw=49.5MiB/s (51.9MB/s), 8382KiB/s-19.4MiB/s (8583kB/s-20.3MB/s), io=187MiB (196MB), run=2965-3778msec 00:18:34.524 00:18:34.524 Disk stats (read/write): 00:18:34.524 nvme0n1: ios=9883/0, merge=0/0, ticks=3020/0, in_queue=3020, util=95.02% 00:18:34.524 nvme0n2: ios=15102/0, merge=0/0, ticks=3345/0, in_queue=3345, util=95.50% 00:18:34.524 nvme0n3: ios=6526/0, merge=0/0, ticks=2955/0, in_queue=2955, util=96.34% 00:18:34.524 nvme0n4: ios=14238/0, merge=0/0, ticks=2642/0, in_queue=2642, util=96.42% 00:18:34.783 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.783 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:35.042 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.042 02:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:35.300 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.300 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:35.558 02:19:23 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.558 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 74491 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:35.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:35.816 nvmf hotplug test: fio failed as expected 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:35.816 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.075 rmmod nvme_tcp 00:18:36.075 rmmod nvme_fabrics 00:18:36.075 rmmod nvme_keyring 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 74073 ']' 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 74073 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 74073 ']' 
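The Remote I/O errors fio reports above are the point of this test: target/fio.sh deletes the raid/concat volumes and then the Malloc bdevs out from under the exported namespaces while the read jobs are still running, and the script records fio's failure as the expected outcome ("nvmf hotplug test: fio failed as expected"). A condensed sketch of that flow, built only from commands that appear in this trace — the loop grouping and the fio_pid handling are illustrative rather than the literal script body:

# from the spdk repo root, start long-running reads against the attached namespaces
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# pull the backing bdevs while I/O is still in flight
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
# fio is expected to exit non-zero once its files disappear
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'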
00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 74073 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:36.075 02:19:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74073 00:18:36.075 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:36.075 killing process with pid 74073 00:18:36.075 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:36.075 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74073' 00:18:36.075 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 74073 00:18:36.075 [2024-05-15 02:19:24.015932] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:36.075 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 74073 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:36.334 ************************************ 00:18:36.334 END TEST nvmf_fio_target 00:18:36.334 ************************************ 00:18:36.334 00:18:36.334 real 0m19.496s 00:18:36.334 user 1m15.713s 00:18:36.334 sys 0m8.426s 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:36.334 02:19:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.334 02:19:24 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:36.334 02:19:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:36.334 02:19:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:36.334 02:19:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.334 ************************************ 00:18:36.334 START TEST nvmf_bdevio 00:18:36.334 ************************************ 00:18:36.334 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:36.593 * Looking for test storage... 
00:18:36.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.593 02:19:24 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:36.593 Cannot find device "nvmf_tgt_br" 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.593 Cannot find device "nvmf_tgt_br2" 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:36.593 Cannot find device "nvmf_tgt_br" 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:36.593 Cannot find device "nvmf_tgt_br2" 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.593 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:36.594 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:36.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:36.853 00:18:36.853 --- 10.0.0.2 ping statistics --- 00:18:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.853 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:36.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:36.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:18:36.853 00:18:36.853 --- 10.0.0.3 ping statistics --- 00:18:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.853 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:36.853 00:18:36.853 --- 10.0.0.1 ping statistics --- 00:18:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.853 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=74814 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 74814 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 74814 ']' 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.853 02:19:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:36.853 [2024-05-15 02:19:24.782446] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:36.854 [2024-05-15 02:19:24.782526] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.112 [2024-05-15 02:19:24.915637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.112 [2024-05-15 02:19:24.986210] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.112 [2024-05-15 02:19:24.986271] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
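Condensed, the veth/namespace topology that nvmf_veth_init builds in the trace above (before nvmf_tgt is launched) looks like the following. Every command is taken from the trace; the grouping and comments are mine, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) plus the individual link-up steps are omitted because they follow the same pattern:

ip netns add nvmf_tgt_ns_spdk                                # the target runs in its own net namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # address the listener will use
ip link add nvmf_br type bridge                              # bridge joins the *_br peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # the sanity pings shown above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1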
00:18:37.112 [2024-05-15 02:19:24.986292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.112 [2024-05-15 02:19:24.986310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.112 [2024-05-15 02:19:24.986323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.112 [2024-05-15 02:19:24.986523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.112 [2024-05-15 02:19:24.987071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:37.112 [2024-05-15 02:19:24.987228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:37.112 [2024-05-15 02:19:24.987234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 [2024-05-15 02:19:25.815945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 Malloc0 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:18:38.051 [2024-05-15 02:19:25.879177] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:38.051 [2024-05-15 02:19:25.879715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.051 { 00:18:38.051 "params": { 00:18:38.051 "name": "Nvme$subsystem", 00:18:38.051 "trtype": "$TEST_TRANSPORT", 00:18:38.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.051 "adrfam": "ipv4", 00:18:38.051 "trsvcid": "$NVMF_PORT", 00:18:38.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.051 "hdgst": ${hdgst:-false}, 00:18:38.051 "ddgst": ${ddgst:-false} 00:18:38.051 }, 00:18:38.051 "method": "bdev_nvme_attach_controller" 00:18:38.051 } 00:18:38.051 EOF 00:18:38.051 )") 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:38.051 02:19:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.051 "params": { 00:18:38.051 "name": "Nvme1", 00:18:38.051 "trtype": "tcp", 00:18:38.051 "traddr": "10.0.0.2", 00:18:38.051 "adrfam": "ipv4", 00:18:38.051 "trsvcid": "4420", 00:18:38.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.051 "hdgst": false, 00:18:38.051 "ddgst": false 00:18:38.051 }, 00:18:38.051 "method": "bdev_nvme_attach_controller" 00:18:38.051 }' 00:18:38.051 [2024-05-15 02:19:25.936585] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
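Two things are interleaved in this part of the trace: the freshly started nvmf_tgt is provisioned over RPC (the rpc_cmd lines, the harness's wrapper around scripts/rpc.py), and bdevio is launched on the host side with a generated JSON config streamed through /dev/fd/62 that attaches the exported namespace as Nvme1n1. The sketch below restates both steps with literal values from the log; the temp-file invocation and the outer subsystems/bdev wrapper around the printed config entry are reconstructed from the usual SPDK JSON-config layout rather than copied from the trace (paths relative to the spdk repo):

# target provisioning, as issued by rpc_cmd in target/bdevio.sh
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host-side equivalent of the generated config fed to bdevio
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json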
00:18:38.051 [2024-05-15 02:19:25.936666] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74864 ] 00:18:38.348 [2024-05-15 02:19:26.076720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.348 [2024-05-15 02:19:26.148670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.348 [2024-05-15 02:19:26.148779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.348 [2024-05-15 02:19:26.148784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.348 I/O targets: 00:18:38.348 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.348 00:18:38.348 00:18:38.349 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.349 http://cunit.sourceforge.net/ 00:18:38.349 00:18:38.349 00:18:38.349 Suite: bdevio tests on: Nvme1n1 00:18:38.642 Test: blockdev write read block ...passed 00:18:38.642 Test: blockdev write zeroes read block ...passed 00:18:38.642 Test: blockdev write zeroes read no split ...passed 00:18:38.642 Test: blockdev write zeroes read split ...passed 00:18:38.642 Test: blockdev write zeroes read split partial ...passed 00:18:38.642 Test: blockdev reset ...[2024-05-15 02:19:26.415832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.642 [2024-05-15 02:19:26.415977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bd660 (9): Bad file descriptor 00:18:38.642 [2024-05-15 02:19:26.436083] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:38.642 passed 00:18:38.642 Test: blockdev write read 8 blocks ...passed 00:18:38.642 Test: blockdev write read size > 128k ...passed 00:18:38.642 Test: blockdev write read invalid size ...passed 00:18:38.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.642 Test: blockdev write read max offset ...passed 00:18:38.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:38.642 Test: blockdev writev readv 8 blocks ...passed 00:18:38.642 Test: blockdev writev readv 30 x 1block ...passed 00:18:38.642 Test: blockdev writev readv block ...passed 00:18:38.642 Test: blockdev writev readv size > 128k ...passed 00:18:38.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:38.642 Test: blockdev comparev and writev ...[2024-05-15 02:19:26.606860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.642 [2024-05-15 02:19:26.606920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.642 [2024-05-15 02:19:26.606942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.642 [2024-05-15 02:19:26.606954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.642 [2024-05-15 02:19:26.607425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.642 [2024-05-15 02:19:26.607454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:38.642 [2024-05-15 02:19:26.607472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.642 [2024-05-15 02:19:26.607482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.642 [2024-05-15 02:19:26.607825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.643 [2024-05-15 02:19:26.607852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.643 [2024-05-15 02:19:26.607870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.643 [2024-05-15 02:19:26.607881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.643 [2024-05-15 02:19:26.608314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.643 [2024-05-15 02:19:26.608341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.643 [2024-05-15 02:19:26.608359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.643 [2024-05-15 02:19:26.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:38.643 passed 00:18:38.900 Test: blockdev nvme passthru rw ...passed 00:18:38.900 Test: blockdev nvme passthru vendor specific ...[2024-05-15 02:19:26.690805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.900 [2024-05-15 02:19:26.690858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:38.900 [2024-05-15 02:19:26.690991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.900 [2024-05-15 02:19:26.691008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:38.900 [2024-05-15 02:19:26.691120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.900 [2024-05-15 02:19:26.691148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:38.900 [2024-05-15 02:19:26.691263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.900 [2024-05-15 02:19:26.691287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:38.900 passed 00:18:38.900 Test: blockdev nvme admin passthru ...passed 00:18:38.900 Test: blockdev copy ...passed 00:18:38.900 00:18:38.900 Run Summary: Type Total Ran Passed Failed Inactive 00:18:38.900 suites 1 1 n/a 0 0 00:18:38.900 tests 23 23 23 0 0 00:18:38.900 asserts 
152 152 152 0 n/a 00:18:38.900 00:18:38.900 Elapsed time = 0.903 seconds 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.157 02:19:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.157 rmmod nvme_tcp 00:18:39.157 rmmod nvme_fabrics 00:18:39.157 rmmod nvme_keyring 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 74814 ']' 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 74814 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 74814 ']' 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 74814 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74814 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:39.157 killing process with pid 74814 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74814' 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 74814 00:18:39.157 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 74814 00:18:39.157 [2024-05-15 02:19:27.057232] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:39.414 00:18:39.414 real 0m3.000s 00:18:39.414 user 0m10.856s 00:18:39.414 sys 0m0.678s 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.414 02:19:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.414 ************************************ 00:18:39.414 END TEST nvmf_bdevio 00:18:39.414 ************************************ 00:18:39.414 02:19:27 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:39.414 02:19:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:39.414 02:19:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:39.414 02:19:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.414 ************************************ 00:18:39.414 START TEST nvmf_auth_target 00:18:39.414 ************************************ 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:39.414 * Looking for test storage... 00:18:39.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.414 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.672 02:19:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.672 02:19:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.672 02:19:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.672 02:19:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:39.673 Cannot find device "nvmf_tgt_br" 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.673 Cannot find device "nvmf_tgt_br2" 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:39.673 Cannot find device "nvmf_tgt_br" 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:39.673 Cannot find device "nvmf_tgt_br2" 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:39.673 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:39.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:39.932 00:18:39.932 --- 10.0.0.2 ping statistics --- 00:18:39.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.932 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:39.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:39.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:39.932 00:18:39.932 --- 10.0.0.3 ping statistics --- 00:18:39.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.932 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:39.932 00:18:39.932 --- 10.0.0.1 ping statistics --- 00:18:39.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.932 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.932 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=75035 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 75035 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 75035 ']' 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
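For readers following the trace, the nvmftestinit/nvmf_veth_init sequence above reduces to the topology sketched below. This is a condensed, commented restatement of the commands already logged (interface names and the 10.0.0.x addresses are taken verbatim from the trace), shown only to make the veth/bridge/namespace layout easier to follow; it is not an extra step the test runs, and it needs root plus iproute2/iptables on a Linux host.

  # Host side: one veth pair for the initiator, two pairs whose far ends move into the target namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target interface 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target interface 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1, targets 10.0.0.2 / 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the three host-side peers together and open TCP/4420 toward the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> namespace reachability, as checked above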
00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.933 02:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=75073 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e9265a7350b5de1dac329e909b426ad44e61b7b925be46e 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gO2 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e9265a7350b5de1dac329e909b426ad44e61b7b925be46e 0 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e9265a7350b5de1dac329e909b426ad44e61b7b925be46e 0 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2e9265a7350b5de1dac329e909b426ad44e61b7b925be46e 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:40.869 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gO2 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gO2 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.gO2 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.128 
02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fa1ca09eafa407a02294cbd3b533f07 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dMo 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fa1ca09eafa407a02294cbd3b533f07 1 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fa1ca09eafa407a02294cbd3b533f07 1 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fa1ca09eafa407a02294cbd3b533f07 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dMo 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dMo 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.dMo 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:41.128 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=06381e5e124adcbf949a4e351a4cd936602afde69fef8328 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4R3 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 06381e5e124adcbf949a4e351a4cd936602afde69fef8328 2 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 06381e5e124adcbf949a4e351a4cd936602afde69fef8328 2 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=06381e5e124adcbf949a4e351a4cd936602afde69fef8328 00:18:41.129 02:19:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:41.129 02:19:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4R3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4R3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.4R3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1fac72399e94750316d10d4790685685d3d9f4db537c7ab5e256b99d6328f000 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8Tp 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1fac72399e94750316d10d4790685685d3d9f4db537c7ab5e256b99d6328f000 3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1fac72399e94750316d10d4790685685d3d9f4db537c7ab5e256b99d6328f000 3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1fac72399e94750316d10d4790685685d3d9f4db537c7ab5e256b99d6328f000 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8Tp 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8Tp 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.8Tp 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 75035 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 75035 ']' 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
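The four gen_dhchap_key calls above (null/48, sha256/32, sha384/48, sha512/64) write their secrets to /tmp/spdk.key-<digest>.XXX files in the DHHC-1 representation that the later --dhchap-secret arguments reuse, with the second field encoding the digest (00=null, 01=sha256, 02=sha384, 03=sha512, as seen in the connect commands further down). Below is a minimal standalone sketch of how a secret of that shape can be assembled; it mirrors the xxd + format_dhchap_key steps in the trace, but the little-endian placement of the trailing CRC32 is an assumption to verify against nvmf/common.sh, and /tmp/example.dhchap.key is a hypothetical output path.

  # Sketch only: build a DHHC-1 secret like the ones logged above (requires xxd and python3)
  key=$(xxd -p -c0 -l 24 /dev/urandom)                # 48 hex chars; the ASCII string itself is the secret
  python3 - "$key" > /tmp/example.dhchap.key <<'PYEOF'
  import base64, sys, zlib
  secret = sys.argv[1].encode()                       # hex string used verbatim as secret bytes
  crc = zlib.crc32(secret).to_bytes(4, "little")      # trailing CRC32; byte order assumed here
  # 00 = null digest; use 01/02/03 for sha256/sha384/sha512 keys
  print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
  PYEOF
  chmod 0600 /tmp/example.dhchap.key                  # the test applies the same 0600 mode to its key files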
00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.129 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 75073 /var/tmp/host.sock 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 75073 ']' 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.697 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gO2 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gO2 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gO2 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dMo 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.956 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.214 02:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.215 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dMo 00:18:42.215 02:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.dMo 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4R3 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4R3 00:18:42.473 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4R3 00:18:42.732 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8Tp 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8Tp 00:18:42.733 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8Tp 00:18:43.001 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:43.001 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.001 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:43.001 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.001 02:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.001 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.290 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.290 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.290 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.548 00:18:43.548 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.548 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.548 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.807 { 00:18:43.807 "auth": { 00:18:43.807 "dhgroup": "null", 00:18:43.807 "digest": "sha256", 00:18:43.807 "state": "completed" 00:18:43.807 }, 00:18:43.807 "cntlid": 1, 00:18:43.807 "listen_address": { 00:18:43.807 "adrfam": "IPv4", 00:18:43.807 "traddr": "10.0.0.2", 00:18:43.807 "trsvcid": "4420", 00:18:43.807 "trtype": "TCP" 00:18:43.807 }, 00:18:43.807 "peer_address": { 00:18:43.807 "adrfam": "IPv4", 00:18:43.807 "traddr": "10.0.0.1", 00:18:43.807 "trsvcid": "44580", 00:18:43.807 "trtype": "TCP" 00:18:43.807 }, 00:18:43.807 "qid": 0, 00:18:43.807 "state": "enabled" 00:18:43.807 } 00:18:43.807 ]' 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.807 02:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.065 02:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.346 02:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.346 00:18:49.346 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.346 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.346 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.604 { 00:18:49.604 "auth": { 00:18:49.604 "dhgroup": 
"null", 00:18:49.604 "digest": "sha256", 00:18:49.604 "state": "completed" 00:18:49.604 }, 00:18:49.604 "cntlid": 3, 00:18:49.604 "listen_address": { 00:18:49.604 "adrfam": "IPv4", 00:18:49.604 "traddr": "10.0.0.2", 00:18:49.604 "trsvcid": "4420", 00:18:49.604 "trtype": "TCP" 00:18:49.604 }, 00:18:49.604 "peer_address": { 00:18:49.604 "adrfam": "IPv4", 00:18:49.604 "traddr": "10.0.0.1", 00:18:49.604 "trsvcid": "53750", 00:18:49.604 "trtype": "TCP" 00:18:49.604 }, 00:18:49.604 "qid": 0, 00:18:49.604 "state": "enabled" 00:18:49.604 } 00:18:49.604 ]' 00:18:49.604 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.862 02:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.138 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.074 02:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.074 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.641 00:18:51.641 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.641 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.641 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.900 { 00:18:51.900 "auth": { 00:18:51.900 "dhgroup": "null", 00:18:51.900 "digest": "sha256", 00:18:51.900 "state": "completed" 00:18:51.900 }, 00:18:51.900 "cntlid": 5, 00:18:51.900 "listen_address": { 00:18:51.900 "adrfam": "IPv4", 00:18:51.900 "traddr": "10.0.0.2", 00:18:51.900 "trsvcid": "4420", 00:18:51.900 "trtype": "TCP" 00:18:51.900 }, 00:18:51.900 "peer_address": { 00:18:51.900 "adrfam": "IPv4", 00:18:51.900 "traddr": "10.0.0.1", 00:18:51.900 "trsvcid": "53778", 00:18:51.900 "trtype": "TCP" 00:18:51.900 }, 00:18:51.900 "qid": 0, 00:18:51.900 "state": "enabled" 00:18:51.900 } 00:18:51.900 ]' 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.900 02:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.158 02:19:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.093 02:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.094 02:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.094 02:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.352 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.611 00:18:53.611 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.611 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.611 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.870 02:19:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.870 { 00:18:53.870 "auth": { 00:18:53.870 "dhgroup": "null", 00:18:53.870 "digest": "sha256", 00:18:53.870 "state": "completed" 00:18:53.870 }, 00:18:53.870 "cntlid": 7, 00:18:53.870 "listen_address": { 00:18:53.870 "adrfam": "IPv4", 00:18:53.870 "traddr": "10.0.0.2", 00:18:53.870 "trsvcid": "4420", 00:18:53.870 "trtype": "TCP" 00:18:53.870 }, 00:18:53.870 "peer_address": { 00:18:53.870 "adrfam": "IPv4", 00:18:53.870 "traddr": "10.0.0.1", 00:18:53.870 "trsvcid": "53806", 00:18:53.870 "trtype": "TCP" 00:18:53.870 }, 00:18:53.870 "qid": 0, 00:18:53.870 "state": "enabled" 00:18:53.870 } 00:18:53.870 ]' 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.870 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.128 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:54.128 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.128 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.128 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.128 02:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.387 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.953 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.954 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:54.954 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.954 02:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.520 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.778 00:18:55.778 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.778 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.778 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:56.037 { 00:18:56.037 "auth": { 00:18:56.037 "dhgroup": "ffdhe2048", 00:18:56.037 "digest": "sha256", 00:18:56.037 "state": "completed" 00:18:56.037 }, 00:18:56.037 "cntlid": 9, 00:18:56.037 "listen_address": { 00:18:56.037 "adrfam": "IPv4", 00:18:56.037 "traddr": "10.0.0.2", 00:18:56.037 "trsvcid": "4420", 00:18:56.037 "trtype": "TCP" 00:18:56.037 }, 00:18:56.037 "peer_address": { 00:18:56.037 "adrfam": "IPv4", 00:18:56.037 "traddr": "10.0.0.1", 00:18:56.037 "trsvcid": "58326", 00:18:56.037 "trtype": "TCP" 00:18:56.037 }, 00:18:56.037 "qid": 0, 00:18:56.037 "state": "enabled" 00:18:56.037 } 00:18:56.037 ]' 00:18:56.037 02:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:56.037 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.037 02:19:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:56.037 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.295 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:56.295 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.295 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.295 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.554 02:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.121 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.379 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.947 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.947 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.205 02:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.206 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.206 { 00:18:58.206 "auth": { 00:18:58.206 "dhgroup": "ffdhe2048", 00:18:58.206 "digest": "sha256", 00:18:58.206 "state": "completed" 00:18:58.206 }, 00:18:58.206 "cntlid": 11, 00:18:58.206 "listen_address": { 00:18:58.206 "adrfam": "IPv4", 00:18:58.206 "traddr": "10.0.0.2", 00:18:58.206 "trsvcid": "4420", 00:18:58.206 "trtype": "TCP" 00:18:58.206 }, 00:18:58.206 "peer_address": { 00:18:58.206 "adrfam": "IPv4", 00:18:58.206 "traddr": "10.0.0.1", 00:18:58.206 "trsvcid": "58362", 00:18:58.206 "trtype": "TCP" 00:18:58.206 }, 00:18:58.206 "qid": 0, 00:18:58.206 "state": "enabled" 00:18:58.206 } 00:18:58.206 ]' 00:18:58.206 02:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.206 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.464 02:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:18:59.399 02:19:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.399 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.658 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.918 00:18:59.918 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.918 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.918 02:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.176 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.176 { 00:19:00.176 "auth": { 00:19:00.176 "dhgroup": "ffdhe2048", 00:19:00.176 "digest": "sha256", 00:19:00.176 "state": "completed" 00:19:00.176 }, 00:19:00.176 "cntlid": 13, 00:19:00.176 "listen_address": { 
00:19:00.176 "adrfam": "IPv4", 00:19:00.176 "traddr": "10.0.0.2", 00:19:00.176 "trsvcid": "4420", 00:19:00.176 "trtype": "TCP" 00:19:00.176 }, 00:19:00.176 "peer_address": { 00:19:00.177 "adrfam": "IPv4", 00:19:00.177 "traddr": "10.0.0.1", 00:19:00.177 "trsvcid": "58384", 00:19:00.177 "trtype": "TCP" 00:19:00.177 }, 00:19:00.177 "qid": 0, 00:19:00.177 "state": "enabled" 00:19:00.177 } 00:19:00.177 ]' 00:19:00.177 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.177 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.177 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.435 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.435 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.435 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.435 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.435 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.693 02:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.259 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:01.542 
02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.542 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.110 00:19:02.110 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.110 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.110 02:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.482 { 00:19:02.482 "auth": { 00:19:02.482 "dhgroup": "ffdhe2048", 00:19:02.482 "digest": "sha256", 00:19:02.482 "state": "completed" 00:19:02.482 }, 00:19:02.482 "cntlid": 15, 00:19:02.482 "listen_address": { 00:19:02.482 "adrfam": "IPv4", 00:19:02.482 "traddr": "10.0.0.2", 00:19:02.482 "trsvcid": "4420", 00:19:02.482 "trtype": "TCP" 00:19:02.482 }, 00:19:02.482 "peer_address": { 00:19:02.482 "adrfam": "IPv4", 00:19:02.482 "traddr": "10.0.0.1", 00:19:02.482 "trsvcid": "58406", 00:19:02.482 "trtype": "TCP" 00:19:02.482 }, 00:19:02.482 "qid": 0, 00:19:02.482 "state": "enabled" 00:19:02.482 } 00:19:02.482 ]' 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.482 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.483 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.741 02:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.308 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:03.873 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:04.132 00:19:04.132 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.132 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.132 02:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:04.391 { 00:19:04.391 "auth": { 00:19:04.391 "dhgroup": "ffdhe3072", 00:19:04.391 "digest": "sha256", 00:19:04.391 "state": "completed" 00:19:04.391 }, 00:19:04.391 "cntlid": 17, 00:19:04.391 "listen_address": { 00:19:04.391 "adrfam": "IPv4", 00:19:04.391 "traddr": "10.0.0.2", 00:19:04.391 "trsvcid": "4420", 00:19:04.391 "trtype": "TCP" 00:19:04.391 }, 00:19:04.391 "peer_address": { 00:19:04.391 "adrfam": "IPv4", 00:19:04.391 "traddr": "10.0.0.1", 00:19:04.391 "trsvcid": "58424", 00:19:04.391 "trtype": "TCP" 00:19:04.391 }, 00:19:04.391 "qid": 0, 00:19:04.391 "state": "enabled" 00:19:04.391 } 00:19:04.391 ]' 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.391 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:04.649 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.649 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.649 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.908 02:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:05.841 02:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:06.108 00:19:06.108 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.108 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.108 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.386 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.386 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.386 02:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.386 02:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.644 { 00:19:06.644 "auth": { 00:19:06.644 "dhgroup": "ffdhe3072", 00:19:06.644 "digest": "sha256", 00:19:06.644 "state": "completed" 00:19:06.644 }, 00:19:06.644 "cntlid": 19, 00:19:06.644 "listen_address": { 00:19:06.644 "adrfam": "IPv4", 00:19:06.644 "traddr": "10.0.0.2", 00:19:06.644 "trsvcid": "4420", 00:19:06.644 "trtype": "TCP" 00:19:06.644 }, 00:19:06.644 "peer_address": { 00:19:06.644 "adrfam": "IPv4", 00:19:06.644 "traddr": "10.0.0.1", 00:19:06.644 "trsvcid": "48942", 00:19:06.644 "trtype": "TCP" 00:19:06.644 }, 00:19:06.644 "qid": 0, 00:19:06.644 "state": "enabled" 00:19:06.644 } 00:19:06.644 ]' 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.644 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.645 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.903 02:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.837 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:08.095 02:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:08.354 00:19:08.354 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.354 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.354 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.613 { 00:19:08.613 "auth": { 00:19:08.613 "dhgroup": "ffdhe3072", 00:19:08.613 "digest": "sha256", 00:19:08.613 "state": "completed" 00:19:08.613 }, 00:19:08.613 "cntlid": 21, 00:19:08.613 "listen_address": { 00:19:08.613 "adrfam": "IPv4", 00:19:08.613 "traddr": "10.0.0.2", 00:19:08.613 "trsvcid": "4420", 00:19:08.613 "trtype": "TCP" 00:19:08.613 }, 00:19:08.613 "peer_address": { 00:19:08.613 "adrfam": "IPv4", 00:19:08.613 "traddr": "10.0.0.1", 00:19:08.613 "trsvcid": "48970", 00:19:08.613 "trtype": "TCP" 00:19:08.613 }, 00:19:08.613 "qid": 0, 00:19:08.613 "state": "enabled" 00:19:08.613 } 00:19:08.613 ]' 00:19:08.613 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.871 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.130 02:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.064 02:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.064 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.632 00:19:10.632 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:10.632 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.632 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:10.890 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:10.891 { 00:19:10.891 "auth": { 00:19:10.891 "dhgroup": "ffdhe3072", 00:19:10.891 "digest": "sha256", 00:19:10.891 "state": "completed" 00:19:10.891 }, 00:19:10.891 "cntlid": 23, 00:19:10.891 "listen_address": { 00:19:10.891 "adrfam": "IPv4", 00:19:10.891 "traddr": 
"10.0.0.2", 00:19:10.891 "trsvcid": "4420", 00:19:10.891 "trtype": "TCP" 00:19:10.891 }, 00:19:10.891 "peer_address": { 00:19:10.891 "adrfam": "IPv4", 00:19:10.891 "traddr": "10.0.0.1", 00:19:10.891 "trsvcid": "48990", 00:19:10.891 "trtype": "TCP" 00:19:10.891 }, 00:19:10.891 "qid": 0, 00:19:10.891 "state": "enabled" 00:19:10.891 } 00:19:10.891 ]' 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.891 02:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.480 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.047 02:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:12.305 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:12.562 00:19:12.562 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:12.562 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:12.562 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:12.820 { 00:19:12.820 "auth": { 00:19:12.820 "dhgroup": "ffdhe4096", 00:19:12.820 "digest": "sha256", 00:19:12.820 "state": "completed" 00:19:12.820 }, 00:19:12.820 "cntlid": 25, 00:19:12.820 "listen_address": { 00:19:12.820 "adrfam": "IPv4", 00:19:12.820 "traddr": "10.0.0.2", 00:19:12.820 "trsvcid": "4420", 00:19:12.820 "trtype": "TCP" 00:19:12.820 }, 00:19:12.820 "peer_address": { 00:19:12.820 "adrfam": "IPv4", 00:19:12.820 "traddr": "10.0.0.1", 00:19:12.820 "trsvcid": "49018", 00:19:12.820 "trtype": "TCP" 00:19:12.820 }, 00:19:12.820 "qid": 0, 00:19:12.820 "state": "enabled" 00:19:12.820 } 00:19:12.820 ]' 00:19:12.820 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:13.078 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.078 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:13.079 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.079 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:13.079 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.079 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.079 02:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.337 02:20:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:14.269 02:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.270 02:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:14.270 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:14.835 00:19:14.835 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.835 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:14.835 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:15.093 { 00:19:15.093 "auth": { 00:19:15.093 "dhgroup": "ffdhe4096", 00:19:15.093 "digest": "sha256", 00:19:15.093 "state": "completed" 00:19:15.093 }, 00:19:15.093 "cntlid": 27, 00:19:15.093 "listen_address": { 00:19:15.093 "adrfam": "IPv4", 00:19:15.093 "traddr": "10.0.0.2", 00:19:15.093 "trsvcid": "4420", 00:19:15.093 "trtype": "TCP" 00:19:15.093 }, 00:19:15.093 "peer_address": { 00:19:15.093 "adrfam": "IPv4", 00:19:15.093 "traddr": "10.0.0.1", 00:19:15.093 "trsvcid": "43604", 00:19:15.093 "trtype": "TCP" 00:19:15.093 }, 00:19:15.093 "qid": 0, 00:19:15.093 "state": "enabled" 00:19:15.093 } 00:19:15.093 ]' 00:19:15.093 02:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:15.093 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.093 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:15.093 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.093 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:15.352 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.352 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.352 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.615 02:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.196 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:16.455 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:17.021 00:19:17.021 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:17.021 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:17.021 02:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:17.279 { 00:19:17.279 "auth": { 00:19:17.279 "dhgroup": "ffdhe4096", 00:19:17.279 "digest": "sha256", 00:19:17.279 "state": "completed" 00:19:17.279 }, 00:19:17.279 "cntlid": 29, 00:19:17.279 "listen_address": { 00:19:17.279 "adrfam": "IPv4", 00:19:17.279 "traddr": "10.0.0.2", 00:19:17.279 "trsvcid": "4420", 00:19:17.279 "trtype": "TCP" 00:19:17.279 }, 00:19:17.279 "peer_address": { 00:19:17.279 "adrfam": "IPv4", 00:19:17.279 "traddr": "10.0.0.1", 00:19:17.279 "trsvcid": "43634", 00:19:17.279 "trtype": "TCP" 00:19:17.279 }, 00:19:17.279 "qid": 0, 00:19:17.279 "state": "enabled" 00:19:17.279 } 00:19:17.279 ]' 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:17.279 
02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.279 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.846 02:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.413 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.980 02:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.238 00:19:19.238 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:19.238 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:19.238 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:19.497 { 00:19:19.497 "auth": { 00:19:19.497 "dhgroup": "ffdhe4096", 00:19:19.497 "digest": "sha256", 00:19:19.497 "state": "completed" 00:19:19.497 }, 00:19:19.497 "cntlid": 31, 00:19:19.497 "listen_address": { 00:19:19.497 "adrfam": "IPv4", 00:19:19.497 "traddr": "10.0.0.2", 00:19:19.497 "trsvcid": "4420", 00:19:19.497 "trtype": "TCP" 00:19:19.497 }, 00:19:19.497 "peer_address": { 00:19:19.497 "adrfam": "IPv4", 00:19:19.497 "traddr": "10.0.0.1", 00:19:19.497 "trsvcid": "43668", 00:19:19.497 "trtype": "TCP" 00:19:19.497 }, 00:19:19.497 "qid": 0, 00:19:19.497 "state": "enabled" 00:19:19.497 } 00:19:19.497 ]' 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.497 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:19.756 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:19.756 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:19.756 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.756 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.756 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.015 02:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.950 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:21.208 02:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:21.466 00:19:21.466 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:21.466 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.466 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:22.032 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.032 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.032 02:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:22.033 { 00:19:22.033 "auth": { 00:19:22.033 "dhgroup": "ffdhe6144", 00:19:22.033 "digest": "sha256", 00:19:22.033 "state": "completed" 
00:19:22.033 }, 00:19:22.033 "cntlid": 33, 00:19:22.033 "listen_address": { 00:19:22.033 "adrfam": "IPv4", 00:19:22.033 "traddr": "10.0.0.2", 00:19:22.033 "trsvcid": "4420", 00:19:22.033 "trtype": "TCP" 00:19:22.033 }, 00:19:22.033 "peer_address": { 00:19:22.033 "adrfam": "IPv4", 00:19:22.033 "traddr": "10.0.0.1", 00:19:22.033 "trsvcid": "43694", 00:19:22.033 "trtype": "TCP" 00:19:22.033 }, 00:19:22.033 "qid": 0, 00:19:22.033 "state": "enabled" 00:19:22.033 } 00:19:22.033 ]' 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.033 02:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.291 02:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:23.227 02:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.227 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:23.485 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:24.053 00:19:24.053 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.053 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.053 02:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.053 { 00:19:24.053 "auth": { 00:19:24.053 "dhgroup": "ffdhe6144", 00:19:24.053 "digest": "sha256", 00:19:24.053 "state": "completed" 00:19:24.053 }, 00:19:24.053 "cntlid": 35, 00:19:24.053 "listen_address": { 00:19:24.053 "adrfam": "IPv4", 00:19:24.053 "traddr": "10.0.0.2", 00:19:24.053 "trsvcid": "4420", 00:19:24.053 "trtype": "TCP" 00:19:24.053 }, 00:19:24.053 "peer_address": { 00:19:24.053 "adrfam": "IPv4", 00:19:24.053 "traddr": "10.0.0.1", 00:19:24.053 "trsvcid": "43712", 00:19:24.053 "trtype": "TCP" 00:19:24.053 }, 00:19:24.053 "qid": 0, 00:19:24.053 "state": "enabled" 00:19:24.053 } 00:19:24.053 ]' 00:19:24.053 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.312 02:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.571 02:20:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:25.166 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.444 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:25.702 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:25.960 00:19:25.960 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:25.960 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:25.960 02:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
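The passes above repeat one verification step after every controller attach: confirm through the host RPC that nvme0 exists, then read the target's qpair list for cnode0 and check the negotiated digest, dhgroup and auth state before detaching again. A minimal sketch of that check, assuming the rpc.py path and the /var/tmp/host.sock socket shown in this log (the expected_* variables and the rpc_cmd stand-in are illustrative, not part of the suite):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC server, as used via hostrpc in the trace
rpc_cmd() { "$rpc" "$@"; }                         # stand-in for the framework's target-side rpc_cmd helper

expected_digest=sha256
expected_dhgroup=ffdhe6144

# The controller attached with the DH-HMAC-CHAP key must show up on the host side...
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ...and the target's qpair for cnode0 must report a completed authentication
# with the digest/dhgroup selected for this pass.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$expected_digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$expected_dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]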
00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:26.527 { 00:19:26.527 "auth": { 00:19:26.527 "dhgroup": "ffdhe6144", 00:19:26.527 "digest": "sha256", 00:19:26.527 "state": "completed" 00:19:26.527 }, 00:19:26.527 "cntlid": 37, 00:19:26.527 "listen_address": { 00:19:26.527 "adrfam": "IPv4", 00:19:26.527 "traddr": "10.0.0.2", 00:19:26.527 "trsvcid": "4420", 00:19:26.527 "trtype": "TCP" 00:19:26.527 }, 00:19:26.527 "peer_address": { 00:19:26.527 "adrfam": "IPv4", 00:19:26.527 "traddr": "10.0.0.1", 00:19:26.527 "trsvcid": "39762", 00:19:26.527 "trtype": "TCP" 00:19:26.527 }, 00:19:26.527 "qid": 0, 00:19:26.527 "state": "enabled" 00:19:26.527 } 00:19:26.527 ]' 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.527 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.785 02:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:27.352 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.918 02:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.206 00:19:28.206 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:28.206 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.206 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:28.465 { 00:19:28.465 "auth": { 00:19:28.465 "dhgroup": "ffdhe6144", 00:19:28.465 "digest": "sha256", 00:19:28.465 "state": "completed" 00:19:28.465 }, 00:19:28.465 "cntlid": 39, 00:19:28.465 "listen_address": { 00:19:28.465 "adrfam": "IPv4", 00:19:28.465 "traddr": "10.0.0.2", 00:19:28.465 "trsvcid": "4420", 00:19:28.465 "trtype": "TCP" 00:19:28.465 }, 00:19:28.465 "peer_address": { 00:19:28.465 "adrfam": "IPv4", 00:19:28.465 "traddr": "10.0.0.1", 00:19:28.465 "trsvcid": "39780", 00:19:28.465 "trtype": "TCP" 00:19:28.465 }, 00:19:28.465 "qid": 0, 00:19:28.465 "state": "enabled" 00:19:28.465 } 00:19:28.465 ]' 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.465 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:28.723 
02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.723 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:28.723 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.723 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.723 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.982 02:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.548 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.806 02:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:29.806 02:20:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:30.739 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:30.739 { 00:19:30.739 "auth": { 00:19:30.739 "dhgroup": "ffdhe8192", 00:19:30.739 "digest": "sha256", 00:19:30.739 "state": "completed" 00:19:30.739 }, 00:19:30.739 "cntlid": 41, 00:19:30.739 "listen_address": { 00:19:30.739 "adrfam": "IPv4", 00:19:30.739 "traddr": "10.0.0.2", 00:19:30.739 "trsvcid": "4420", 00:19:30.739 "trtype": "TCP" 00:19:30.739 }, 00:19:30.739 "peer_address": { 00:19:30.739 "adrfam": "IPv4", 00:19:30.739 "traddr": "10.0.0.1", 00:19:30.739 "trsvcid": "39798", 00:19:30.739 "trtype": "TCP" 00:19:30.739 }, 00:19:30.739 "qid": 0, 00:19:30.739 "state": "enabled" 00:19:30.739 } 00:19:30.739 ]' 00:19:30.739 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.997 02:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.255 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.188 02:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:32.188 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:33.129 00:19:33.129 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:33.129 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:33.129 02:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.129 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.129 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.129 02:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.129 02:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:33.396 { 00:19:33.396 "auth": { 00:19:33.396 "dhgroup": "ffdhe8192", 00:19:33.396 "digest": "sha256", 00:19:33.396 "state": 
"completed" 00:19:33.396 }, 00:19:33.396 "cntlid": 43, 00:19:33.396 "listen_address": { 00:19:33.396 "adrfam": "IPv4", 00:19:33.396 "traddr": "10.0.0.2", 00:19:33.396 "trsvcid": "4420", 00:19:33.396 "trtype": "TCP" 00:19:33.396 }, 00:19:33.396 "peer_address": { 00:19:33.396 "adrfam": "IPv4", 00:19:33.396 "traddr": "10.0.0.1", 00:19:33.396 "trsvcid": "39814", 00:19:33.396 "trtype": "TCP" 00:19:33.396 }, 00:19:33.396 "qid": 0, 00:19:33.396 "state": "enabled" 00:19:33.396 } 00:19:33.396 ]' 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.396 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.655 02:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.221 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.787 02:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:35.353 00:19:35.353 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:35.353 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.353 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.622 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:35.622 { 00:19:35.622 "auth": { 00:19:35.622 "dhgroup": "ffdhe8192", 00:19:35.622 "digest": "sha256", 00:19:35.622 "state": "completed" 00:19:35.622 }, 00:19:35.622 "cntlid": 45, 00:19:35.623 "listen_address": { 00:19:35.623 "adrfam": "IPv4", 00:19:35.623 "traddr": "10.0.0.2", 00:19:35.623 "trsvcid": "4420", 00:19:35.623 "trtype": "TCP" 00:19:35.623 }, 00:19:35.623 "peer_address": { 00:19:35.623 "adrfam": "IPv4", 00:19:35.623 "traddr": "10.0.0.1", 00:19:35.623 "trsvcid": "50558", 00:19:35.623 "trtype": "TCP" 00:19:35.623 }, 00:19:35.623 "qid": 0, 00:19:35.623 "state": "enabled" 00:19:35.623 } 00:19:35.623 ]' 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.623 02:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.193 02:20:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:36.758 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.759 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.017 02:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.951 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:37.951 { 00:19:37.951 "auth": { 00:19:37.951 "dhgroup": "ffdhe8192", 00:19:37.951 "digest": "sha256", 00:19:37.951 "state": "completed" 00:19:37.951 }, 00:19:37.951 "cntlid": 47, 00:19:37.951 "listen_address": { 00:19:37.951 "adrfam": "IPv4", 00:19:37.951 "traddr": "10.0.0.2", 00:19:37.951 "trsvcid": "4420", 00:19:37.951 "trtype": "TCP" 00:19:37.951 }, 00:19:37.951 "peer_address": { 00:19:37.951 "adrfam": "IPv4", 00:19:37.951 "traddr": "10.0.0.1", 00:19:37.951 "trsvcid": "50578", 00:19:37.951 "trtype": "TCP" 00:19:37.951 }, 00:19:37.951 "qid": 0, 00:19:37.951 "state": "enabled" 00:19:37.951 } 00:19:37.951 ]' 00:19:37.951 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.210 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.210 02:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.210 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.210 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:38.210 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.210 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.210 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.468 02:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.407 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.666 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.925 00:19:39.925 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:39.925 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:39.925 02:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:40.184 { 00:19:40.184 "auth": { 00:19:40.184 "dhgroup": "null", 00:19:40.184 "digest": "sha384", 00:19:40.184 "state": "completed" 00:19:40.184 }, 00:19:40.184 "cntlid": 49, 00:19:40.184 "listen_address": { 00:19:40.184 "adrfam": "IPv4", 00:19:40.184 "traddr": "10.0.0.2", 00:19:40.184 "trsvcid": "4420", 00:19:40.184 "trtype": "TCP" 00:19:40.184 }, 00:19:40.184 "peer_address": { 00:19:40.184 "adrfam": "IPv4", 00:19:40.184 "traddr": "10.0.0.1", 00:19:40.184 "trsvcid": "50608", 00:19:40.184 "trtype": "TCP" 00:19:40.184 }, 00:19:40.184 "qid": 0, 00:19:40.184 "state": "enabled" 00:19:40.184 } 00:19:40.184 ]' 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
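Each iteration in this section has the same shape for a given (digest, dhgroup, key) combination; only the values change per pass. A condensed sketch of one pass, reusing the hostrpc/rpc_cmd helpers from the sketch above and assuming a DHCHAP_SECRET variable that holds the matching plain-text DHHC-1 secret (the literal secrets appear verbatim in the log); this is a paraphrase of the flow, not the actual target/auth.sh:

digest=sha384 dhgroup=null keyid=0               # values for the pass shown here
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d
hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d

# 1. Limit the host-side initiator to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Allow the host on the target subsystem, bound to the key index being tested.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
# 3. Attach a controller through the host RPC stack, run the qpair checks above, detach.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
hostrpc bdev_nvme_detach_controller nvme0
# 4. Repeat the handshake with nvme-cli, passing the plain-text secret directly.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret "$DHCHAP_SECRET"   # e.g. the DHHC-1:00:... string used for key0 above
nvme disconnect -n "$subnqn"
# 5. Remove the host so the next key/dhgroup combination starts from a clean slate.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The outer for-loops at target/auth.sh@84, @85 and @86 in the trace then rerun this pass for every digest, dhgroup and key index in turn.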
00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:40.184 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:40.443 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.443 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.443 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.702 02:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.269 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.836 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:41.836 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:41.836 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:19:41.837 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:42.094 00:19:42.094 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:42.094 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:42.094 02:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:42.352 { 00:19:42.352 "auth": { 00:19:42.352 "dhgroup": "null", 00:19:42.352 "digest": "sha384", 00:19:42.352 "state": "completed" 00:19:42.352 }, 00:19:42.352 "cntlid": 51, 00:19:42.352 "listen_address": { 00:19:42.352 "adrfam": "IPv4", 00:19:42.352 "traddr": "10.0.0.2", 00:19:42.352 "trsvcid": "4420", 00:19:42.352 "trtype": "TCP" 00:19:42.352 }, 00:19:42.352 "peer_address": { 00:19:42.352 "adrfam": "IPv4", 00:19:42.352 "traddr": "10.0.0.1", 00:19:42.352 "trsvcid": "50634", 00:19:42.352 "trtype": "TCP" 00:19:42.352 }, 00:19:42.352 "qid": 0, 00:19:42.352 "state": "enabled" 00:19:42.352 } 00:19:42.352 ]' 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.352 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.921 02:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.489 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:43.749 02:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.008 00:19:44.267 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:44.267 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.267 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:44.525 { 00:19:44.525 "auth": { 00:19:44.525 "dhgroup": "null", 00:19:44.525 "digest": "sha384", 00:19:44.525 "state": "completed" 00:19:44.525 }, 
00:19:44.525 "cntlid": 53, 00:19:44.525 "listen_address": { 00:19:44.525 "adrfam": "IPv4", 00:19:44.525 "traddr": "10.0.0.2", 00:19:44.525 "trsvcid": "4420", 00:19:44.525 "trtype": "TCP" 00:19:44.525 }, 00:19:44.525 "peer_address": { 00:19:44.525 "adrfam": "IPv4", 00:19:44.525 "traddr": "10.0.0.1", 00:19:44.525 "trsvcid": "50660", 00:19:44.525 "trtype": "TCP" 00:19:44.525 }, 00:19:44.525 "qid": 0, 00:19:44.525 "state": "enabled" 00:19:44.525 } 00:19:44.525 ]' 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.525 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:44.526 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:44.526 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:44.526 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.526 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.526 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.091 02:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.672 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 
--dhchap-key key3 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.930 02:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.188 00:19:46.188 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:46.188 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:46.188 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:46.446 { 00:19:46.446 "auth": { 00:19:46.446 "dhgroup": "null", 00:19:46.446 "digest": "sha384", 00:19:46.446 "state": "completed" 00:19:46.446 }, 00:19:46.446 "cntlid": 55, 00:19:46.446 "listen_address": { 00:19:46.446 "adrfam": "IPv4", 00:19:46.446 "traddr": "10.0.0.2", 00:19:46.446 "trsvcid": "4420", 00:19:46.446 "trtype": "TCP" 00:19:46.446 }, 00:19:46.446 "peer_address": { 00:19:46.446 "adrfam": "IPv4", 00:19:46.446 "traddr": "10.0.0.1", 00:19:46.446 "trsvcid": "44348", 00:19:46.446 "trtype": "TCP" 00:19:46.446 }, 00:19:46.446 "qid": 0, 00:19:46.446 "state": "enabled" 00:19:46.446 } 00:19:46.446 ]' 00:19:46.446 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.704 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.962 02:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.897 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.156 02:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.414 00:19:48.414 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:48.414 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.414 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:48.672 02:20:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:48.672 { 00:19:48.672 "auth": { 00:19:48.672 "dhgroup": "ffdhe2048", 00:19:48.672 "digest": "sha384", 00:19:48.672 "state": "completed" 00:19:48.672 }, 00:19:48.672 "cntlid": 57, 00:19:48.672 "listen_address": { 00:19:48.672 "adrfam": "IPv4", 00:19:48.672 "traddr": "10.0.0.2", 00:19:48.672 "trsvcid": "4420", 00:19:48.672 "trtype": "TCP" 00:19:48.672 }, 00:19:48.672 "peer_address": { 00:19:48.672 "adrfam": "IPv4", 00:19:48.672 "traddr": "10.0.0.1", 00:19:48.672 "trsvcid": "44374", 00:19:48.672 "trtype": "TCP" 00:19:48.672 }, 00:19:48.672 "qid": 0, 00:19:48.672 "state": "enabled" 00:19:48.672 } 00:19:48.672 ]' 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.672 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:48.930 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.930 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:48.930 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.930 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.930 02:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.189 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.126 02:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:50.384 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:50.642 00:19:50.642 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:50.642 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.642 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:50.899 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.899 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.899 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.900 { 00:19:50.900 "auth": { 00:19:50.900 "dhgroup": "ffdhe2048", 00:19:50.900 "digest": "sha384", 00:19:50.900 "state": "completed" 00:19:50.900 }, 00:19:50.900 "cntlid": 59, 00:19:50.900 "listen_address": { 00:19:50.900 "adrfam": "IPv4", 00:19:50.900 "traddr": "10.0.0.2", 00:19:50.900 "trsvcid": "4420", 00:19:50.900 "trtype": "TCP" 00:19:50.900 }, 00:19:50.900 "peer_address": { 00:19:50.900 "adrfam": "IPv4", 00:19:50.900 "traddr": "10.0.0.1", 00:19:50.900 "trsvcid": "44394", 00:19:50.900 "trtype": "TCP" 00:19:50.900 }, 00:19:50.900 "qid": 0, 00:19:50.900 "state": "enabled" 00:19:50.900 } 00:19:50.900 ]' 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.900 02:20:38 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:51.193 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.193 02:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:51.193 02:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.193 02:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.193 02:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.506 02:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.072 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:52.640 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:52.898 00:19:52.898 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.898 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.898 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:53.157 { 00:19:53.157 "auth": { 00:19:53.157 "dhgroup": "ffdhe2048", 00:19:53.157 "digest": "sha384", 00:19:53.157 "state": "completed" 00:19:53.157 }, 00:19:53.157 "cntlid": 61, 00:19:53.157 "listen_address": { 00:19:53.157 "adrfam": "IPv4", 00:19:53.157 "traddr": "10.0.0.2", 00:19:53.157 "trsvcid": "4420", 00:19:53.157 "trtype": "TCP" 00:19:53.157 }, 00:19:53.157 "peer_address": { 00:19:53.157 "adrfam": "IPv4", 00:19:53.157 "traddr": "10.0.0.1", 00:19:53.157 "trsvcid": "44414", 00:19:53.157 "trtype": "TCP" 00:19:53.157 }, 00:19:53.157 "qid": 0, 00:19:53.157 "state": "enabled" 00:19:53.157 } 00:19:53.157 ]' 00:19:53.157 02:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.157 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.415 02:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.351 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.609 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.866 00:19:55.124 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:55.124 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:55.124 02:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:55.382 { 00:19:55.382 "auth": { 00:19:55.382 "dhgroup": "ffdhe2048", 00:19:55.382 "digest": "sha384", 00:19:55.382 "state": "completed" 00:19:55.382 }, 00:19:55.382 "cntlid": 63, 00:19:55.382 "listen_address": { 00:19:55.382 "adrfam": "IPv4", 
00:19:55.382 "traddr": "10.0.0.2", 00:19:55.382 "trsvcid": "4420", 00:19:55.382 "trtype": "TCP" 00:19:55.382 }, 00:19:55.382 "peer_address": { 00:19:55.382 "adrfam": "IPv4", 00:19:55.382 "traddr": "10.0.0.1", 00:19:55.382 "trsvcid": "53892", 00:19:55.382 "trtype": "TCP" 00:19:55.382 }, 00:19:55.382 "qid": 0, 00:19:55.382 "state": "enabled" 00:19:55.382 } 00:19:55.382 ]' 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.382 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.667 02:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.608 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.174 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:19:57.174 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:57.174 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.174 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.174 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:57.175 02:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:57.434 00:19:57.434 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:57.434 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.434 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:57.694 { 00:19:57.694 "auth": { 00:19:57.694 "dhgroup": "ffdhe3072", 00:19:57.694 "digest": "sha384", 00:19:57.694 "state": "completed" 00:19:57.694 }, 00:19:57.694 "cntlid": 65, 00:19:57.694 "listen_address": { 00:19:57.694 "adrfam": "IPv4", 00:19:57.694 "traddr": "10.0.0.2", 00:19:57.694 "trsvcid": "4420", 00:19:57.694 "trtype": "TCP" 00:19:57.694 }, 00:19:57.694 "peer_address": { 00:19:57.694 "adrfam": "IPv4", 00:19:57.694 "traddr": "10.0.0.1", 00:19:57.694 "trsvcid": "53910", 00:19:57.694 "trtype": "TCP" 00:19:57.694 }, 00:19:57.694 "qid": 0, 00:19:57.694 "state": "enabled" 00:19:57.694 } 00:19:57.694 ]' 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.694 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:57.953 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.953 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:57.953 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.953 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.953 02:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.212 02:20:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.149 02:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:59.409 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:59.976 00:19:59.976 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:59.976 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:59.976 02:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.235 { 00:20:00.235 "auth": { 00:20:00.235 "dhgroup": "ffdhe3072", 00:20:00.235 "digest": "sha384", 00:20:00.235 "state": "completed" 00:20:00.235 }, 00:20:00.235 "cntlid": 67, 00:20:00.235 "listen_address": { 00:20:00.235 "adrfam": "IPv4", 00:20:00.235 "traddr": "10.0.0.2", 00:20:00.235 "trsvcid": "4420", 00:20:00.235 "trtype": "TCP" 00:20:00.235 }, 00:20:00.235 "peer_address": { 00:20:00.235 "adrfam": "IPv4", 00:20:00.235 "traddr": "10.0.0.1", 00:20:00.235 "trsvcid": "53938", 00:20:00.235 "trtype": "TCP" 00:20:00.235 }, 00:20:00.235 "qid": 0, 00:20:00.235 "state": "enabled" 00:20:00.235 } 00:20:00.235 ]' 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.235 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.804 02:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.385 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe3072 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:02.016 02:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:02.276 00:20:02.276 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.276 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.276 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:02.845 { 00:20:02.845 "auth": { 00:20:02.845 "dhgroup": "ffdhe3072", 00:20:02.845 "digest": "sha384", 00:20:02.845 "state": "completed" 00:20:02.845 }, 00:20:02.845 "cntlid": 69, 00:20:02.845 "listen_address": { 00:20:02.845 "adrfam": "IPv4", 00:20:02.845 "traddr": "10.0.0.2", 00:20:02.845 "trsvcid": "4420", 00:20:02.845 "trtype": "TCP" 00:20:02.845 }, 00:20:02.845 "peer_address": { 00:20:02.845 "adrfam": "IPv4", 00:20:02.845 "traddr": "10.0.0.1", 00:20:02.845 "trsvcid": "53954", 00:20:02.845 "trtype": "TCP" 00:20:02.845 }, 00:20:02.845 "qid": 0, 00:20:02.845 "state": "enabled" 00:20:02.845 } 00:20:02.845 ]' 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:02.845 
02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.845 02:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.103 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.040 02:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.300 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.866 00:20:04.866 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.866 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:04.866 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:05.124 { 00:20:05.124 "auth": { 00:20:05.124 "dhgroup": "ffdhe3072", 00:20:05.124 "digest": "sha384", 00:20:05.124 "state": "completed" 00:20:05.124 }, 00:20:05.124 "cntlid": 71, 00:20:05.124 "listen_address": { 00:20:05.124 "adrfam": "IPv4", 00:20:05.124 "traddr": "10.0.0.2", 00:20:05.124 "trsvcid": "4420", 00:20:05.124 "trtype": "TCP" 00:20:05.124 }, 00:20:05.124 "peer_address": { 00:20:05.124 "adrfam": "IPv4", 00:20:05.124 "traddr": "10.0.0.1", 00:20:05.124 "trsvcid": "35918", 00:20:05.124 "trtype": "TCP" 00:20:05.124 }, 00:20:05.124 "qid": 0, 00:20:05.124 "state": "enabled" 00:20:05.124 } 00:20:05.124 ]' 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.124 02:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:05.124 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.124 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:05.124 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.124 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.125 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.382 02:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.337 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.900 02:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:07.158 00:20:07.158 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:07.158 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:07.158 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:07.415 { 00:20:07.415 "auth": { 00:20:07.415 "dhgroup": "ffdhe4096", 00:20:07.415 "digest": "sha384", 00:20:07.415 "state": "completed" 
00:20:07.415 }, 00:20:07.415 "cntlid": 73, 00:20:07.415 "listen_address": { 00:20:07.415 "adrfam": "IPv4", 00:20:07.415 "traddr": "10.0.0.2", 00:20:07.415 "trsvcid": "4420", 00:20:07.415 "trtype": "TCP" 00:20:07.415 }, 00:20:07.415 "peer_address": { 00:20:07.415 "adrfam": "IPv4", 00:20:07.415 "traddr": "10.0.0.1", 00:20:07.415 "trsvcid": "35944", 00:20:07.415 "trtype": "TCP" 00:20:07.415 }, 00:20:07.415 "qid": 0, 00:20:07.415 "state": "enabled" 00:20:07.415 } 00:20:07.415 ]' 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.415 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:07.673 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.673 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:07.673 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.673 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.673 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.931 02:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.864 02:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:09.122 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:09.688 00:20:09.688 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.688 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:09.689 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.946 { 00:20:09.946 "auth": { 00:20:09.946 "dhgroup": "ffdhe4096", 00:20:09.946 "digest": "sha384", 00:20:09.946 "state": "completed" 00:20:09.946 }, 00:20:09.946 "cntlid": 75, 00:20:09.946 "listen_address": { 00:20:09.946 "adrfam": "IPv4", 00:20:09.946 "traddr": "10.0.0.2", 00:20:09.946 "trsvcid": "4420", 00:20:09.946 "trtype": "TCP" 00:20:09.946 }, 00:20:09.946 "peer_address": { 00:20:09.946 "adrfam": "IPv4", 00:20:09.946 "traddr": "10.0.0.1", 00:20:09.946 "trsvcid": "35984", 00:20:09.946 "trtype": "TCP" 00:20:09.946 }, 00:20:09.946 "qid": 0, 00:20:09.946 "state": "enabled" 00:20:09.946 } 00:20:09.946 ]' 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.946 02:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:10.203 02:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.203 02:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:10.203 02:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.203 02:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.203 02:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.460 02:20:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.395 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.653 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.910 00:20:12.169 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:12.169 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.169 02:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
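The trace repeats the same host-side DH-CHAP check for every digest/dhgroup/key combination: the test restricts the initiator to one digest and FFDHE group, allows the host NQN on the target subsystem with the key under test, attaches a controller, confirms that qpair 0 reports the expected digest, dhgroup and a "completed" auth state, then tears the connection down and re-verifies the same key with nvme-cli. A condensed sketch of one such pass (not the verbatim target/auth.sh commands), assuming the rpc.py path, RPC sockets, NQNs and host UUID visible in the surrounding trace; $rpc, $uuid, $hostnqn and $secret are shorthands introduced here only, the log spells each value out in full:

  uuid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: restrict DH-CHAP negotiation to the digest/dhgroup under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # target side: allow the host NQN with the key under test
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key0
  # host side: attach a controller, then check the negotiated auth parameters on qpair 0
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expected: completed
  # tear down, then re-check the same key via nvme-cli before moving to the next combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $hostnqn --hostid $uuid --dhchap-secret "$secret"   # $secret: the DHHC-1 string for the same key, as printed in the log
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $hostnqn
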
00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:12.427 { 00:20:12.427 "auth": { 00:20:12.427 "dhgroup": "ffdhe4096", 00:20:12.427 "digest": "sha384", 00:20:12.427 "state": "completed" 00:20:12.427 }, 00:20:12.427 "cntlid": 77, 00:20:12.427 "listen_address": { 00:20:12.427 "adrfam": "IPv4", 00:20:12.427 "traddr": "10.0.0.2", 00:20:12.427 "trsvcid": "4420", 00:20:12.427 "trtype": "TCP" 00:20:12.427 }, 00:20:12.427 "peer_address": { 00:20:12.427 "adrfam": "IPv4", 00:20:12.427 "traddr": "10.0.0.1", 00:20:12.427 "trsvcid": "36004", 00:20:12.427 "trtype": "TCP" 00:20:12.427 }, 00:20:12.427 "qid": 0, 00:20:12.427 "state": "enabled" 00:20:12.427 } 00:20:12.427 ]' 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.427 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.993 02:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:13.941 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.941 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:13.941 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.941 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.942 02:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.506 00:20:14.506 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.506 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.506 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.763 { 00:20:14.763 "auth": { 00:20:14.763 "dhgroup": "ffdhe4096", 00:20:14.763 "digest": "sha384", 00:20:14.763 "state": "completed" 00:20:14.763 }, 00:20:14.763 "cntlid": 79, 00:20:14.763 "listen_address": { 00:20:14.763 "adrfam": "IPv4", 00:20:14.763 "traddr": "10.0.0.2", 00:20:14.763 "trsvcid": "4420", 00:20:14.763 "trtype": "TCP" 00:20:14.763 }, 00:20:14.763 "peer_address": { 00:20:14.763 "adrfam": "IPv4", 00:20:14.763 "traddr": "10.0.0.1", 00:20:14.763 "trsvcid": "41600", 00:20:14.763 "trtype": "TCP" 00:20:14.763 }, 00:20:14.763 "qid": 0, 00:20:14.763 "state": "enabled" 00:20:14.763 } 00:20:14.763 ]' 00:20:14.763 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:15.021 
02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.021 02:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.586 02:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.519 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.778 02:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.778 02:21:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:17.344 00:20:17.344 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:17.344 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.344 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:17.615 { 00:20:17.615 "auth": { 00:20:17.615 "dhgroup": "ffdhe6144", 00:20:17.615 "digest": "sha384", 00:20:17.615 "state": "completed" 00:20:17.615 }, 00:20:17.615 "cntlid": 81, 00:20:17.615 "listen_address": { 00:20:17.615 "adrfam": "IPv4", 00:20:17.615 "traddr": "10.0.0.2", 00:20:17.615 "trsvcid": "4420", 00:20:17.615 "trtype": "TCP" 00:20:17.615 }, 00:20:17.615 "peer_address": { 00:20:17.615 "adrfam": "IPv4", 00:20:17.615 "traddr": "10.0.0.1", 00:20:17.615 "trsvcid": "41622", 00:20:17.615 "trtype": "TCP" 00:20:17.615 }, 00:20:17.615 "qid": 0, 00:20:17.615 "state": "enabled" 00:20:17.615 } 00:20:17.615 ]' 00:20:17.615 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.877 02:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.135 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:19.068 02:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:19.325 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:19.891 00:20:19.891 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:19.891 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.891 02:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:20.150 { 00:20:20.150 "auth": { 00:20:20.150 "dhgroup": "ffdhe6144", 00:20:20.150 "digest": "sha384", 00:20:20.150 "state": 
"completed" 00:20:20.150 }, 00:20:20.150 "cntlid": 83, 00:20:20.150 "listen_address": { 00:20:20.150 "adrfam": "IPv4", 00:20:20.150 "traddr": "10.0.0.2", 00:20:20.150 "trsvcid": "4420", 00:20:20.150 "trtype": "TCP" 00:20:20.150 }, 00:20:20.150 "peer_address": { 00:20:20.150 "adrfam": "IPv4", 00:20:20.150 "traddr": "10.0.0.1", 00:20:20.150 "trsvcid": "41646", 00:20:20.150 "trtype": "TCP" 00:20:20.150 }, 00:20:20.150 "qid": 0, 00:20:20.150 "state": "enabled" 00:20:20.150 } 00:20:20.150 ]' 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.150 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:20.408 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.408 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:20.408 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.408 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.408 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.666 02:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:21.236 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.236 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:21.236 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.236 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.236 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.493 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.493 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.493 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.751 02:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:22.008 00:20:22.008 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:22.008 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:22.008 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:22.574 { 00:20:22.574 "auth": { 00:20:22.574 "dhgroup": "ffdhe6144", 00:20:22.574 "digest": "sha384", 00:20:22.574 "state": "completed" 00:20:22.574 }, 00:20:22.574 "cntlid": 85, 00:20:22.574 "listen_address": { 00:20:22.574 "adrfam": "IPv4", 00:20:22.574 "traddr": "10.0.0.2", 00:20:22.574 "trsvcid": "4420", 00:20:22.574 "trtype": "TCP" 00:20:22.574 }, 00:20:22.574 "peer_address": { 00:20:22.574 "adrfam": "IPv4", 00:20:22.574 "traddr": "10.0.0.1", 00:20:22.574 "trsvcid": "41680", 00:20:22.574 "trtype": "TCP" 00:20:22.574 }, 00:20:22.574 "qid": 0, 00:20:22.574 "state": "enabled" 00:20:22.574 } 00:20:22.574 ]' 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.574 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:22.832 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.832 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.832 02:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.090 02:21:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.024 02:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.283 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.849 00:20:24.849 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.849 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.849 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.106 { 00:20:25.106 "auth": { 00:20:25.106 "dhgroup": "ffdhe6144", 00:20:25.106 "digest": "sha384", 00:20:25.106 "state": "completed" 00:20:25.106 }, 00:20:25.106 "cntlid": 87, 00:20:25.106 "listen_address": { 00:20:25.106 "adrfam": "IPv4", 00:20:25.106 "traddr": "10.0.0.2", 00:20:25.106 "trsvcid": "4420", 00:20:25.106 "trtype": "TCP" 00:20:25.106 }, 00:20:25.106 "peer_address": { 00:20:25.106 "adrfam": "IPv4", 00:20:25.106 "traddr": "10.0.0.1", 00:20:25.106 "trsvcid": "55260", 00:20:25.106 "trtype": "TCP" 00:20:25.106 }, 00:20:25.106 "qid": 0, 00:20:25.106 "state": "enabled" 00:20:25.106 } 00:20:25.106 ]' 00:20:25.106 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.107 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.107 02:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.107 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.107 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.107 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.107 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.107 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.670 02:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.601 02:21:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:26.869 02:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.449 00:20:27.707 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.707 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.707 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.966 { 00:20:27.966 "auth": { 00:20:27.966 "dhgroup": "ffdhe8192", 00:20:27.966 "digest": "sha384", 00:20:27.966 "state": "completed" 00:20:27.966 }, 00:20:27.966 "cntlid": 89, 00:20:27.966 "listen_address": { 00:20:27.966 "adrfam": "IPv4", 00:20:27.966 "traddr": "10.0.0.2", 00:20:27.966 "trsvcid": "4420", 00:20:27.966 "trtype": "TCP" 00:20:27.966 }, 00:20:27.966 "peer_address": { 00:20:27.966 "adrfam": "IPv4", 00:20:27.966 "traddr": "10.0.0.1", 00:20:27.966 "trsvcid": "55286", 00:20:27.966 "trtype": "TCP" 00:20:27.966 }, 00:20:27.966 "qid": 0, 00:20:27.966 "state": "enabled" 00:20:27.966 } 00:20:27.966 ]' 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.966 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:28.223 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.223 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.223 02:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.481 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.047 02:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.305 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:29.305 02:21:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:30.240 00:20:30.240 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:30.240 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:30.240 02:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:30.240 { 00:20:30.240 "auth": { 00:20:30.240 "dhgroup": "ffdhe8192", 00:20:30.240 "digest": "sha384", 00:20:30.240 "state": "completed" 00:20:30.240 }, 00:20:30.240 "cntlid": 91, 00:20:30.240 "listen_address": { 00:20:30.240 "adrfam": "IPv4", 00:20:30.240 "traddr": "10.0.0.2", 00:20:30.240 "trsvcid": "4420", 00:20:30.240 "trtype": "TCP" 00:20:30.240 }, 00:20:30.240 "peer_address": { 00:20:30.240 "adrfam": "IPv4", 00:20:30.240 "traddr": "10.0.0.1", 00:20:30.240 "trsvcid": "55296", 00:20:30.240 "trtype": "TCP" 00:20:30.240 }, 00:20:30.240 "qid": 0, 00:20:30.240 "state": "enabled" 00:20:30.240 } 00:20:30.240 ]' 00:20:30.240 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.498 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.817 02:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.762 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.019 02:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.019 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.019 02:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.586 00:20:32.586 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:32.586 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.586 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.844 { 00:20:32.844 "auth": { 00:20:32.844 "dhgroup": "ffdhe8192", 00:20:32.844 "digest": "sha384", 00:20:32.844 "state": 
"completed" 00:20:32.844 }, 00:20:32.844 "cntlid": 93, 00:20:32.844 "listen_address": { 00:20:32.844 "adrfam": "IPv4", 00:20:32.844 "traddr": "10.0.0.2", 00:20:32.844 "trsvcid": "4420", 00:20:32.844 "trtype": "TCP" 00:20:32.844 }, 00:20:32.844 "peer_address": { 00:20:32.844 "adrfam": "IPv4", 00:20:32.844 "traddr": "10.0.0.1", 00:20:32.844 "trsvcid": "55328", 00:20:32.844 "trtype": "TCP" 00:20:32.844 }, 00:20:32.844 "qid": 0, 00:20:32.844 "state": "enabled" 00:20:32.844 } 00:20:32.844 ]' 00:20:32.844 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.103 02:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.361 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:33.927 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.927 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:33.927 02:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.927 02:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.184 02:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.185 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.185 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.185 02:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.442 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.032 00:20:35.032 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.032 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.032 02:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:35.289 { 00:20:35.289 "auth": { 00:20:35.289 "dhgroup": "ffdhe8192", 00:20:35.289 "digest": "sha384", 00:20:35.289 "state": "completed" 00:20:35.289 }, 00:20:35.289 "cntlid": 95, 00:20:35.289 "listen_address": { 00:20:35.289 "adrfam": "IPv4", 00:20:35.289 "traddr": "10.0.0.2", 00:20:35.289 "trsvcid": "4420", 00:20:35.289 "trtype": "TCP" 00:20:35.289 }, 00:20:35.289 "peer_address": { 00:20:35.289 "adrfam": "IPv4", 00:20:35.289 "traddr": "10.0.0.1", 00:20:35.289 "trsvcid": "59984", 00:20:35.289 "trtype": "TCP" 00:20:35.289 }, 00:20:35.289 "qid": 0, 00:20:35.289 "state": "enabled" 00:20:35.289 } 00:20:35.289 ]' 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.289 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:35.547 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.547 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:35.547 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.547 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.547 02:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.804 02:21:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.738 02:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:37.301 00:20:37.301 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:37.301 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.301 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:37.302 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.302 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.302 02:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.302 02:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.302 02:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:37.558 { 00:20:37.558 "auth": { 00:20:37.558 "dhgroup": "null", 00:20:37.558 "digest": "sha512", 00:20:37.558 "state": "completed" 00:20:37.558 }, 00:20:37.558 "cntlid": 97, 00:20:37.558 "listen_address": { 00:20:37.558 "adrfam": "IPv4", 00:20:37.558 "traddr": "10.0.0.2", 00:20:37.558 "trsvcid": "4420", 00:20:37.558 "trtype": "TCP" 00:20:37.558 }, 00:20:37.558 "peer_address": { 00:20:37.558 "adrfam": "IPv4", 00:20:37.558 "traddr": "10.0.0.1", 00:20:37.558 "trsvcid": "60016", 00:20:37.558 "trtype": "TCP" 00:20:37.558 }, 00:20:37.558 "qid": 0, 00:20:37.558 "state": "enabled" 00:20:37.558 } 00:20:37.558 ]' 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.558 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.815 02:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:20:38.750 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.008 02:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.265 00:20:39.265 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:39.265 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:39.265 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.523 { 00:20:39.523 "auth": { 00:20:39.523 "dhgroup": "null", 00:20:39.523 "digest": "sha512", 00:20:39.523 "state": "completed" 00:20:39.523 }, 00:20:39.523 "cntlid": 99, 00:20:39.523 "listen_address": { 00:20:39.523 "adrfam": "IPv4", 00:20:39.523 "traddr": "10.0.0.2", 00:20:39.523 "trsvcid": "4420", 00:20:39.523 "trtype": "TCP" 00:20:39.523 }, 00:20:39.523 "peer_address": { 00:20:39.523 "adrfam": "IPv4", 00:20:39.523 "traddr": "10.0.0.1", 00:20:39.523 "trsvcid": "60042", 00:20:39.523 "trtype": "TCP" 00:20:39.523 }, 00:20:39.523 "qid": 0, 00:20:39.523 "state": "enabled" 00:20:39.523 } 00:20:39.523 ]' 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.523 02:21:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:39.523 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.781 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.781 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.781 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.084 02:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.649 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.906 02:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:40.906 02:21:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.164 00:20:41.164 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.164 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.164 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.421 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.421 { 00:20:41.421 "auth": { 00:20:41.421 "dhgroup": "null", 00:20:41.421 "digest": "sha512", 00:20:41.421 "state": "completed" 00:20:41.421 }, 00:20:41.422 "cntlid": 101, 00:20:41.422 "listen_address": { 00:20:41.422 "adrfam": "IPv4", 00:20:41.422 "traddr": "10.0.0.2", 00:20:41.422 "trsvcid": "4420", 00:20:41.422 "trtype": "TCP" 00:20:41.422 }, 00:20:41.422 "peer_address": { 00:20:41.422 "adrfam": "IPv4", 00:20:41.422 "traddr": "10.0.0.1", 00:20:41.422 "trsvcid": "60060", 00:20:41.422 "trtype": "TCP" 00:20:41.422 }, 00:20:41.422 "qid": 0, 00:20:41.422 "state": "enabled" 00:20:41.422 } 00:20:41.422 ]' 00:20:41.422 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.681 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.938 02:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.871 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.129 02:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.388 00:20:43.388 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:43.388 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.388 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:43.646 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.646 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.646 02:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.646 02:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.905 { 00:20:43.905 "auth": { 00:20:43.905 "dhgroup": "null", 00:20:43.905 "digest": "sha512", 00:20:43.905 "state": "completed" 00:20:43.905 }, 
00:20:43.905 "cntlid": 103, 00:20:43.905 "listen_address": { 00:20:43.905 "adrfam": "IPv4", 00:20:43.905 "traddr": "10.0.0.2", 00:20:43.905 "trsvcid": "4420", 00:20:43.905 "trtype": "TCP" 00:20:43.905 }, 00:20:43.905 "peer_address": { 00:20:43.905 "adrfam": "IPv4", 00:20:43.905 "traddr": "10.0.0.1", 00:20:43.905 "trsvcid": "60096", 00:20:43.905 "trtype": "TCP" 00:20:43.905 }, 00:20:43.905 "qid": 0, 00:20:43.905 "state": "enabled" 00:20:43.905 } 00:20:43.905 ]' 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.905 02:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.480 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.045 02:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.302 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.865 00:20:45.865 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:45.865 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.865 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.122 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:46.122 { 00:20:46.123 "auth": { 00:20:46.123 "dhgroup": "ffdhe2048", 00:20:46.123 "digest": "sha512", 00:20:46.123 "state": "completed" 00:20:46.123 }, 00:20:46.123 "cntlid": 105, 00:20:46.123 "listen_address": { 00:20:46.123 "adrfam": "IPv4", 00:20:46.123 "traddr": "10.0.0.2", 00:20:46.123 "trsvcid": "4420", 00:20:46.123 "trtype": "TCP" 00:20:46.123 }, 00:20:46.123 "peer_address": { 00:20:46.123 "adrfam": "IPv4", 00:20:46.123 "traddr": "10.0.0.1", 00:20:46.123 "trsvcid": "38262", 00:20:46.123 "trtype": "TCP" 00:20:46.123 }, 00:20:46.123 "qid": 0, 00:20:46.123 "state": "enabled" 00:20:46.123 } 00:20:46.123 ]' 00:20:46.123 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:46.123 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.123 02:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:46.123 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.123 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:46.123 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.123 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.123 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.688 02:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.257 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:47.515 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.084 00:20:48.084 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:48.084 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:48.084 02:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
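The trace keeps repeating the same host-side authentication cycle for each digest/dhgroup/key combination. The following is a minimal sketch of one iteration, reconstructed only from the commands visible above; DIGEST, DHGROUP and KEYID are placeholders, and rpc_cmd is the test framework's target-side RPC helper (outside the framework it would be a direct rpc.py call against the target's socket).

    # Hedged sketch of one iteration of the auth loop (not a verbatim excerpt of target/auth.sh).
    DIGEST=sha512
    DHGROUP=ffdhe2048
    KEYID=key1
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # 1. Restrict the host initiator to a single digest/dhgroup pair
    $HOSTRPC bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
    # 2. Register the host on the subsystem with the DH-HMAC-CHAP key under test
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$KEYID"
    # 3. Attach a controller, which forces authentication with that key
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$KEYID"
    # 4. Confirm the target's qpair reports the expected digest, dhgroup and "completed" state
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    # 5. Tear down before the next combination
    $HOSTRPC bdev_nvme_detach_controller nvme0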
00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.084 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:48.084 { 00:20:48.084 "auth": { 00:20:48.084 "dhgroup": "ffdhe2048", 00:20:48.084 "digest": "sha512", 00:20:48.084 "state": "completed" 00:20:48.084 }, 00:20:48.084 "cntlid": 107, 00:20:48.084 "listen_address": { 00:20:48.084 "adrfam": "IPv4", 00:20:48.084 "traddr": "10.0.0.2", 00:20:48.084 "trsvcid": "4420", 00:20:48.084 "trtype": "TCP" 00:20:48.084 }, 00:20:48.084 "peer_address": { 00:20:48.084 "adrfam": "IPv4", 00:20:48.084 "traddr": "10.0.0.1", 00:20:48.084 "trsvcid": "38278", 00:20:48.085 "trtype": "TCP" 00:20:48.085 }, 00:20:48.085 "qid": 0, 00:20:48.085 "state": "enabled" 00:20:48.085 } 00:20:48.085 ]' 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.346 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.603 02:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:49.537 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.127 00:20:50.127 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:50.127 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.127 02:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:50.389 { 00:20:50.389 "auth": { 00:20:50.389 "dhgroup": "ffdhe2048", 00:20:50.389 "digest": "sha512", 00:20:50.389 "state": "completed" 00:20:50.389 }, 00:20:50.389 "cntlid": 109, 00:20:50.389 "listen_address": { 00:20:50.389 "adrfam": "IPv4", 00:20:50.389 "traddr": "10.0.0.2", 00:20:50.389 "trsvcid": "4420", 00:20:50.389 "trtype": "TCP" 00:20:50.389 }, 00:20:50.389 "peer_address": { 00:20:50.389 "adrfam": "IPv4", 00:20:50.389 "traddr": "10.0.0.1", 00:20:50.389 "trsvcid": "38290", 00:20:50.389 "trtype": "TCP" 00:20:50.389 }, 00:20:50.389 "qid": 0, 00:20:50.389 "state": "enabled" 00:20:50.389 } 00:20:50.389 ]' 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.389 02:21:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.389 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.647 02:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.581 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.840 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.098 00:20:52.098 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:52.098 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.098 02:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.357 { 00:20:52.357 "auth": { 00:20:52.357 "dhgroup": "ffdhe2048", 00:20:52.357 "digest": "sha512", 00:20:52.357 "state": "completed" 00:20:52.357 }, 00:20:52.357 "cntlid": 111, 00:20:52.357 "listen_address": { 00:20:52.357 "adrfam": "IPv4", 00:20:52.357 "traddr": "10.0.0.2", 00:20:52.357 "trsvcid": "4420", 00:20:52.357 "trtype": "TCP" 00:20:52.357 }, 00:20:52.357 "peer_address": { 00:20:52.357 "adrfam": "IPv4", 00:20:52.357 "traddr": "10.0.0.1", 00:20:52.357 "trsvcid": "38328", 00:20:52.357 "trtype": "TCP" 00:20:52.357 }, 00:20:52.357 "qid": 0, 00:20:52.357 "state": "enabled" 00:20:52.357 } 00:20:52.357 ]' 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.357 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.615 02:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.550 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.808 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:54.066 00:20:54.066 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:54.066 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:54.066 02:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.324 { 
00:20:54.324 "auth": { 00:20:54.324 "dhgroup": "ffdhe3072", 00:20:54.324 "digest": "sha512", 00:20:54.324 "state": "completed" 00:20:54.324 }, 00:20:54.324 "cntlid": 113, 00:20:54.324 "listen_address": { 00:20:54.324 "adrfam": "IPv4", 00:20:54.324 "traddr": "10.0.0.2", 00:20:54.324 "trsvcid": "4420", 00:20:54.324 "trtype": "TCP" 00:20:54.324 }, 00:20:54.324 "peer_address": { 00:20:54.324 "adrfam": "IPv4", 00:20:54.324 "traddr": "10.0.0.1", 00:20:54.324 "trsvcid": "38340", 00:20:54.324 "trtype": "TCP" 00:20:54.324 }, 00:20:54.324 "qid": 0, 00:20:54.324 "state": "enabled" 00:20:54.324 } 00:20:54.324 ]' 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.324 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.890 02:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.457 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.023 02:21:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.023 02:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.281 00:20:56.281 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:56.281 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:56.281 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:56.538 { 00:20:56.538 "auth": { 00:20:56.538 "dhgroup": "ffdhe3072", 00:20:56.538 "digest": "sha512", 00:20:56.538 "state": "completed" 00:20:56.538 }, 00:20:56.538 "cntlid": 115, 00:20:56.538 "listen_address": { 00:20:56.538 "adrfam": "IPv4", 00:20:56.538 "traddr": "10.0.0.2", 00:20:56.538 "trsvcid": "4420", 00:20:56.538 "trtype": "TCP" 00:20:56.538 }, 00:20:56.538 "peer_address": { 00:20:56.538 "adrfam": "IPv4", 00:20:56.538 "traddr": "10.0.0.1", 00:20:56.538 "trsvcid": "41556", 00:20:56.538 "trtype": "TCP" 00:20:56.538 }, 00:20:56.538 "qid": 0, 00:20:56.538 "state": "enabled" 00:20:56.538 } 00:20:56.538 ]' 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.538 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.796 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.796 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.796 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.796 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.796 02:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.054 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.987 02:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.245 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:20:58.245 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:58.245 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.245 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.246 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.811 00:20:58.811 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:58.811 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.811 02:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 
-- # jq -r '.[].name' 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.068 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.068 { 00:20:59.068 "auth": { 00:20:59.068 "dhgroup": "ffdhe3072", 00:20:59.068 "digest": "sha512", 00:20:59.068 "state": "completed" 00:20:59.068 }, 00:20:59.068 "cntlid": 117, 00:20:59.068 "listen_address": { 00:20:59.068 "adrfam": "IPv4", 00:20:59.068 "traddr": "10.0.0.2", 00:20:59.069 "trsvcid": "4420", 00:20:59.069 "trtype": "TCP" 00:20:59.069 }, 00:20:59.069 "peer_address": { 00:20:59.069 "adrfam": "IPv4", 00:20:59.069 "traddr": "10.0.0.1", 00:20:59.069 "trsvcid": "41566", 00:20:59.069 "trtype": "TCP" 00:20:59.069 }, 00:20:59.069 "qid": 0, 00:20:59.069 "state": "enabled" 00:20:59.069 } 00:20:59.069 ]' 00:20:59.069 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.326 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.891 02:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:21:00.458 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.459 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.715 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.979 02:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.979 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.979 02:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.276 00:21:01.276 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:01.276 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.276 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:01.844 { 00:21:01.844 "auth": { 00:21:01.844 "dhgroup": "ffdhe3072", 00:21:01.844 "digest": "sha512", 00:21:01.844 "state": "completed" 00:21:01.844 }, 00:21:01.844 "cntlid": 119, 00:21:01.844 "listen_address": { 00:21:01.844 "adrfam": "IPv4", 00:21:01.844 "traddr": "10.0.0.2", 00:21:01.844 "trsvcid": "4420", 00:21:01.844 "trtype": "TCP" 00:21:01.844 }, 00:21:01.844 "peer_address": { 00:21:01.844 "adrfam": "IPv4", 00:21:01.844 "traddr": "10.0.0.1", 00:21:01.844 "trsvcid": "41588", 00:21:01.844 "trtype": "TCP" 00:21:01.844 }, 00:21:01.844 "qid": 0, 00:21:01.844 "state": "enabled" 00:21:01.844 } 00:21:01.844 ]' 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
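[Editor's note] For readers tracing the records above: they show one pass of the connect_authenticate helper from target/auth.sh, where the host bdev_nvme layer is restricted to a single digest/DH group, the host NQN is added to the subsystem with one of the pre-provisioned DH-HMAC-CHAP keys, a controller is attached through the host RPC socket, and the negotiated auth parameters are read back from the target. The shell sketch below reconstructs that sequence from the commands visible in this trace; SUBNQN, HOSTNQN and the key name stand in for the values printed above, hostrpc mirrors the expansion shown in the log, and rpc_cmd is assumed to be the usual target-side rpc.py wrapper from the autotest scripts.

    # Sketch of one connect_authenticate pass (digest, dhgroup and key vary per iteration).
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # as expanded in the trace

    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1

    # The attach only succeeds if DH-HMAC-CHAP completed; confirm the controller exists
    # and inspect what the target recorded for the new qpair.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'              # expect: nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'   # digest, dhgroup, state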
00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.844 02:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.411 02:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.344 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:03.602 02:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.168 00:21:04.168 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.168 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.168 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.427 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.427 { 00:21:04.427 "auth": { 00:21:04.427 "dhgroup": "ffdhe4096", 00:21:04.427 "digest": "sha512", 00:21:04.427 "state": "completed" 00:21:04.427 }, 00:21:04.427 "cntlid": 121, 00:21:04.427 "listen_address": { 00:21:04.427 "adrfam": "IPv4", 00:21:04.427 "traddr": "10.0.0.2", 00:21:04.428 "trsvcid": "4420", 00:21:04.428 "trtype": "TCP" 00:21:04.428 }, 00:21:04.428 "peer_address": { 00:21:04.428 "adrfam": "IPv4", 00:21:04.428 "traddr": "10.0.0.1", 00:21:04.428 "trsvcid": "41628", 00:21:04.428 "trtype": "TCP" 00:21:04.428 }, 00:21:04.428 "qid": 0, 00:21:04.428 "state": "enabled" 00:21:04.428 } 00:21:04.428 ]' 00:21:04.428 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.686 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.253 02:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.187 02:21:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.187 02:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:06.445 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:06.703 00:21:06.703 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:06.703 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:06.703 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:06.970 { 
00:21:06.970 "auth": { 00:21:06.970 "dhgroup": "ffdhe4096", 00:21:06.970 "digest": "sha512", 00:21:06.970 "state": "completed" 00:21:06.970 }, 00:21:06.970 "cntlid": 123, 00:21:06.970 "listen_address": { 00:21:06.970 "adrfam": "IPv4", 00:21:06.970 "traddr": "10.0.0.2", 00:21:06.970 "trsvcid": "4420", 00:21:06.970 "trtype": "TCP" 00:21:06.970 }, 00:21:06.970 "peer_address": { 00:21:06.970 "adrfam": "IPv4", 00:21:06.970 "traddr": "10.0.0.1", 00:21:06.970 "trsvcid": "47832", 00:21:06.970 "trtype": "TCP" 00:21:06.970 }, 00:21:06.970 "qid": 0, 00:21:06.970 "state": "enabled" 00:21:06.970 } 00:21:06.970 ]' 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.970 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:07.255 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.255 02:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:07.255 02:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.255 02:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.255 02:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.513 02:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.445 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.446 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:09.012 00:21:09.012 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:09.012 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:09.012 02:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:09.577 { 00:21:09.577 "auth": { 00:21:09.577 "dhgroup": "ffdhe4096", 00:21:09.577 "digest": "sha512", 00:21:09.577 "state": "completed" 00:21:09.577 }, 00:21:09.577 "cntlid": 125, 00:21:09.577 "listen_address": { 00:21:09.577 "adrfam": "IPv4", 00:21:09.577 "traddr": "10.0.0.2", 00:21:09.577 "trsvcid": "4420", 00:21:09.577 "trtype": "TCP" 00:21:09.577 }, 00:21:09.577 "peer_address": { 00:21:09.577 "adrfam": "IPv4", 00:21:09.577 "traddr": "10.0.0.1", 00:21:09.577 "trsvcid": "47854", 00:21:09.577 "trtype": "TCP" 00:21:09.577 }, 00:21:09.577 "qid": 0, 00:21:09.577 "state": "enabled" 00:21:09.577 } 00:21:09.577 ]' 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.577 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.141 02:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.705 02:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.270 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.527 00:21:11.785 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:11.785 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:11.785 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
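[Editor's note] Each pass then also exercises the kernel initiator: the RPC-attached controller is detached, nvme connect is run with the matching DHHC-1 secret, the fabric connection is dropped again with nvme disconnect, and the host is removed from the subsystem before the next key is tried. A minimal sketch of that tail of the loop, reusing the placeholders above (the real DHHC-1 secrets appear verbatim in the trace; DHCHAP_SECRET here is only a stand-in):

    hostrpc bdev_nvme_detach_controller nvme0

    # In-band authentication from the kernel host; the secret must correspond to the
    # --dhchap-key currently configured for $HOSTNQN on the subsystem.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret "$DHCHAP_SECRET"
    nvme disconnect -n "$SUBNQN"        # log reports: disconnected 1 controller(s)

    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"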
00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.043 { 00:21:12.043 "auth": { 00:21:12.043 "dhgroup": "ffdhe4096", 00:21:12.043 "digest": "sha512", 00:21:12.043 "state": "completed" 00:21:12.043 }, 00:21:12.043 "cntlid": 127, 00:21:12.043 "listen_address": { 00:21:12.043 "adrfam": "IPv4", 00:21:12.043 "traddr": "10.0.0.2", 00:21:12.043 "trsvcid": "4420", 00:21:12.043 "trtype": "TCP" 00:21:12.043 }, 00:21:12.043 "peer_address": { 00:21:12.043 "adrfam": "IPv4", 00:21:12.043 "traddr": "10.0.0.1", 00:21:12.043 "trsvcid": "47874", 00:21:12.043 "trtype": "TCP" 00:21:12.043 }, 00:21:12.043 "qid": 0, 00:21:12.043 "state": "enabled" 00:21:12.043 } 00:21:12.043 ]' 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.043 02:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.043 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.043 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.306 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.306 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.306 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.568 02:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.502 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.067 00:21:14.067 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:14.067 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.067 02:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:14.324 { 00:21:14.324 "auth": { 00:21:14.324 "dhgroup": "ffdhe6144", 00:21:14.324 "digest": "sha512", 00:21:14.324 "state": "completed" 00:21:14.324 }, 00:21:14.324 "cntlid": 129, 00:21:14.324 "listen_address": { 00:21:14.324 "adrfam": "IPv4", 00:21:14.324 "traddr": "10.0.0.2", 00:21:14.324 "trsvcid": "4420", 00:21:14.324 "trtype": "TCP" 00:21:14.324 }, 00:21:14.324 "peer_address": { 00:21:14.324 "adrfam": "IPv4", 00:21:14.324 "traddr": "10.0.0.1", 00:21:14.324 "trsvcid": "47914", 00:21:14.324 "trtype": "TCP" 00:21:14.324 }, 00:21:14.324 "qid": 0, 00:21:14.324 "state": "enabled" 00:21:14.324 } 00:21:14.324 ]' 00:21:14.324 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
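[Editor's note] From this point the trace moves on from ffdhe4096 to ffdhe6144 and later ffdhe8192, because the test sweeps every DH group against every key index for the sha512 digest. A rough reconstruction of the outer loops implied by the target/auth.sh@85-89 markers follows; the exact contents of the dhgroups and keys arrays are not visible in this excerpt, so the values below are assumptions based on the groups that do appear.

    # Outer sweep as implied by the "for dhgroup" / "for keyid" trace markers.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # assumed; only these appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                     # keys[] is populated earlier in auth.sh
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # the pass sketched earlier
        done
    done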
00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.582 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.840 02:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.771 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.029 02:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.029 02:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.029 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:16.029 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:16.594 00:21:16.594 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:16.594 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.594 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.160 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:17.160 { 00:21:17.160 "auth": { 00:21:17.161 "dhgroup": "ffdhe6144", 00:21:17.161 "digest": "sha512", 00:21:17.161 "state": "completed" 00:21:17.161 }, 00:21:17.161 "cntlid": 131, 00:21:17.161 "listen_address": { 00:21:17.161 "adrfam": "IPv4", 00:21:17.161 "traddr": "10.0.0.2", 00:21:17.161 "trsvcid": "4420", 00:21:17.161 "trtype": "TCP" 00:21:17.161 }, 00:21:17.161 "peer_address": { 00:21:17.161 "adrfam": "IPv4", 00:21:17.161 "traddr": "10.0.0.1", 00:21:17.161 "trsvcid": "39260", 00:21:17.161 "trtype": "TCP" 00:21:17.161 }, 00:21:17.161 "qid": 0, 00:21:17.161 "state": "enabled" 00:21:17.161 } 00:21:17.161 ]' 00:21:17.161 02:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.161 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.418 02:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.359 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:18.618 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:19.183 00:21:19.183 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:19.183 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:19.183 02:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:19.484 { 00:21:19.484 "auth": { 
00:21:19.484 "dhgroup": "ffdhe6144", 00:21:19.484 "digest": "sha512", 00:21:19.484 "state": "completed" 00:21:19.484 }, 00:21:19.484 "cntlid": 133, 00:21:19.484 "listen_address": { 00:21:19.484 "adrfam": "IPv4", 00:21:19.484 "traddr": "10.0.0.2", 00:21:19.484 "trsvcid": "4420", 00:21:19.484 "trtype": "TCP" 00:21:19.484 }, 00:21:19.484 "peer_address": { 00:21:19.484 "adrfam": "IPv4", 00:21:19.484 "traddr": "10.0.0.1", 00:21:19.484 "trsvcid": "39296", 00:21:19.484 "trtype": "TCP" 00:21:19.484 }, 00:21:19.484 "qid": 0, 00:21:19.484 "state": "enabled" 00:21:19.484 } 00:21:19.484 ]' 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.484 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:19.742 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.742 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.742 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.000 02:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.934 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.192 02:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.451 00:21:21.451 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:21.451 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:21.451 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:22.018 { 00:21:22.018 "auth": { 00:21:22.018 "dhgroup": "ffdhe6144", 00:21:22.018 "digest": "sha512", 00:21:22.018 "state": "completed" 00:21:22.018 }, 00:21:22.018 "cntlid": 135, 00:21:22.018 "listen_address": { 00:21:22.018 "adrfam": "IPv4", 00:21:22.018 "traddr": "10.0.0.2", 00:21:22.018 "trsvcid": "4420", 00:21:22.018 "trtype": "TCP" 00:21:22.018 }, 00:21:22.018 "peer_address": { 00:21:22.018 "adrfam": "IPv4", 00:21:22.018 "traddr": "10.0.0.1", 00:21:22.018 "trsvcid": "39326", 00:21:22.018 "trtype": "TCP" 00:21:22.018 }, 00:21:22.018 "qid": 0, 00:21:22.018 "state": "enabled" 00:21:22.018 } 00:21:22.018 ]' 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:22.018 02:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.018 02:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:22.276 02:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.276 02:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.276 02:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.535 02:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.101 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:23.667 02:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:24.235 00:21:24.235 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:24.235 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:24.235 02:22:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:24.801 { 00:21:24.801 "auth": { 00:21:24.801 "dhgroup": "ffdhe8192", 00:21:24.801 "digest": "sha512", 00:21:24.801 "state": "completed" 00:21:24.801 }, 00:21:24.801 "cntlid": 137, 00:21:24.801 "listen_address": { 00:21:24.801 "adrfam": "IPv4", 00:21:24.801 "traddr": "10.0.0.2", 00:21:24.801 "trsvcid": "4420", 00:21:24.801 "trtype": "TCP" 00:21:24.801 }, 00:21:24.801 "peer_address": { 00:21:24.801 "adrfam": "IPv4", 00:21:24.801 "traddr": "10.0.0.1", 00:21:24.801 "trsvcid": "39358", 00:21:24.801 "trtype": "TCP" 00:21:24.801 }, 00:21:24.801 "qid": 0, 00:21:24.801 "state": "enabled" 00:21:24.801 } 00:21:24.801 ]' 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.801 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.060 02:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:25.993 02:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:26.932 00:21:26.932 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:26.932 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:26.932 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.189 { 00:21:27.189 "auth": { 00:21:27.189 "dhgroup": "ffdhe8192", 00:21:27.189 "digest": "sha512", 00:21:27.189 "state": "completed" 00:21:27.189 }, 00:21:27.189 "cntlid": 139, 00:21:27.189 "listen_address": { 00:21:27.189 "adrfam": "IPv4", 00:21:27.189 "traddr": "10.0.0.2", 00:21:27.189 "trsvcid": "4420", 00:21:27.189 "trtype": "TCP" 00:21:27.189 }, 00:21:27.189 "peer_address": { 00:21:27.189 "adrfam": "IPv4", 00:21:27.189 "traddr": "10.0.0.1", 00:21:27.189 "trsvcid": "33326", 00:21:27.189 "trtype": "TCP" 00:21:27.189 }, 00:21:27.189 "qid": 0, 00:21:27.189 "state": "enabled" 00:21:27.189 } 00:21:27.189 ]' 00:21:27.189 02:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
jq -r '.[0].auth.digest' 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.189 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.753 02:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:01:N2ZhMWNhMDllYWZhNDA3YTAyMjk0Y2JkM2I1MzNmMDfjybSN: 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.684 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key2 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.942 02:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:29.874 00:21:29.874 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:29.874 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:29.874 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:30.132 { 00:21:30.132 "auth": { 00:21:30.132 "dhgroup": "ffdhe8192", 00:21:30.132 "digest": "sha512", 00:21:30.132 "state": "completed" 00:21:30.132 }, 00:21:30.132 "cntlid": 141, 00:21:30.132 "listen_address": { 00:21:30.132 "adrfam": "IPv4", 00:21:30.132 "traddr": "10.0.0.2", 00:21:30.132 "trsvcid": "4420", 00:21:30.132 "trtype": "TCP" 00:21:30.132 }, 00:21:30.132 "peer_address": { 00:21:30.132 "adrfam": "IPv4", 00:21:30.132 "traddr": "10.0.0.1", 00:21:30.132 "trsvcid": "33356", 00:21:30.132 "trtype": "TCP" 00:21:30.132 }, 00:21:30.132 "qid": 0, 00:21:30.132 "state": "enabled" 00:21:30.132 } 00:21:30.132 ]' 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.132 02:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:30.132 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.132 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:30.132 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.132 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.132 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.390 02:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:02:MDYzODFlNWUxMjRhZGNiZjk0OWE0ZTM1MWE0Y2Q5MzY2MDJhZmRlNjlmZWY4MzI4klrdhw==: 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.324 02:22:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.324 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key3 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.583 02:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:32.518 00:21:32.518 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:32.518 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:32.518 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.776 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:32.776 { 
00:21:32.776 "auth": { 00:21:32.776 "dhgroup": "ffdhe8192", 00:21:32.776 "digest": "sha512", 00:21:32.776 "state": "completed" 00:21:32.776 }, 00:21:32.776 "cntlid": 143, 00:21:32.776 "listen_address": { 00:21:32.776 "adrfam": "IPv4", 00:21:32.776 "traddr": "10.0.0.2", 00:21:32.776 "trsvcid": "4420", 00:21:32.776 "trtype": "TCP" 00:21:32.776 }, 00:21:32.776 "peer_address": { 00:21:32.776 "adrfam": "IPv4", 00:21:32.776 "traddr": "10.0.0.1", 00:21:32.776 "trsvcid": "33396", 00:21:32.776 "trtype": "TCP" 00:21:32.776 }, 00:21:32.776 "qid": 0, 00:21:32.776 "state": "enabled" 00:21:32.776 } 00:21:32.776 ]' 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.035 02:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.294 02:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:03:MWZhYzcyMzk5ZTk0NzUwMzE2ZDEwZDQ3OTA2ODU2ODVkM2Q5ZjRkYjUzN2M3YWI1ZTI1NmI5OWQ2MzI4ZjAwMLCxWuQ=: 00:21:34.229 02:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.229 02:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:34.229 02:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.229 02:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.229 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 
-- # connect_authenticate sha512 ffdhe8192 0 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key0 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:34.487 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.054 00:21:35.054 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:35.054 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:35.054 02:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:35.312 { 00:21:35.312 "auth": { 00:21:35.312 "dhgroup": "ffdhe8192", 00:21:35.312 "digest": "sha512", 00:21:35.312 "state": "completed" 00:21:35.312 }, 00:21:35.312 "cntlid": 145, 00:21:35.312 "listen_address": { 00:21:35.312 "adrfam": "IPv4", 00:21:35.312 "traddr": "10.0.0.2", 00:21:35.312 "trsvcid": "4420", 00:21:35.312 "trtype": "TCP" 00:21:35.312 }, 00:21:35.312 "peer_address": { 00:21:35.312 "adrfam": "IPv4", 00:21:35.312 "traddr": "10.0.0.1", 00:21:35.312 "trsvcid": "35206", 00:21:35.312 "trtype": "TCP" 00:21:35.312 }, 00:21:35.312 "qid": 0, 00:21:35.312 "state": "enabled" 00:21:35.312 } 00:21:35.312 ]' 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:21:35.312 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:35.571 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.571 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.571 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.829 02:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-secret DHHC-1:00:MmU5MjY1YTczNTBiNWRlMWRhYzMyOWU5MDliNDI2YWQ0NGU2MWI3YjkyNWJlNDZljiVnlg==: 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --dhchap-key key1 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.763 02:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:37.329 2024/05/15 02:22:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:37.329 request: 00:21:37.329 { 00:21:37.329 "method": "bdev_nvme_attach_controller", 00:21:37.329 "params": { 00:21:37.329 "name": "nvme0", 00:21:37.329 "trtype": "tcp", 00:21:37.329 "traddr": "10.0.0.2", 00:21:37.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d", 00:21:37.329 "adrfam": "ipv4", 00:21:37.329 "trsvcid": "4420", 00:21:37.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.329 "dhchap_key": "key2" 00:21:37.329 } 00:21:37.329 } 00:21:37.329 Got JSON-RPC error response 00:21:37.329 GoRPCClient: error on JSON-RPC call 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 75073 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 75073 ']' 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 75073 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75073 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:37.329 killing process with pid 75073 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75073' 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 75073 00:21:37.329 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 75073 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@22 -- # nvmftestfini 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:37.588 rmmod nvme_tcp 00:21:37.588 rmmod nvme_fabrics 00:21:37.588 rmmod nvme_keyring 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 75035 ']' 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 75035 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 75035 ']' 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 75035 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.588 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75035 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:37.847 killing process with pid 75035 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75035' 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 75035 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 75035 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gO2 /tmp/spdk.key-sha256.dMo /tmp/spdk.key-sha384.4R3 /tmp/spdk.key-sha512.8Tp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:37.847 00:21:37.847 real 2m58.504s 00:21:37.847 user 7m14.677s 00:21:37.847 sys 0m21.398s 00:21:37.847 02:22:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:37.847 02:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.847 ************************************ 00:21:37.847 END TEST nvmf_auth_target 00:21:37.847 ************************************ 00:21:38.107 02:22:25 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:38.107 02:22:25 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.107 02:22:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:38.107 02:22:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:38.107 02:22:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:38.107 ************************************ 00:21:38.107 START TEST nvmf_bdevio_no_huge 00:21:38.107 ************************************ 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.107 * Looking for test storage... 00:21:38.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.107 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 
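(Note: the trace that follows shows nvmftestinit calling nvmf_veth_init to build the virtual test network for the bdevio run: a network namespace for the target, veth pairs for the initiator and target interfaces, and a bridge joining the host-side ends, verified with pings before nvmf_tgt is started. The condensed sketch below is not part of the original script; the interface names, addresses, and commands are taken from the trace itself, while the grouping, ordering, and comments are simplified here for readability and omit the link-up and teardown steps.

  # condensed recap of the nvmf_veth_init steps exercised in the trace below
  ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge                               # bridge joining the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # reachability check before the target starts

The "Cannot find device" and "Cannot open network namespace" messages at the start of the trace come from tearing down any leftover interfaces from a previous run and are expected.)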
00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:38.108 02:22:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:38.108 Cannot find device "nvmf_tgt_br" 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.108 Cannot find 
device "nvmf_tgt_br2" 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:38.108 Cannot find device "nvmf_tgt_br" 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:38.108 Cannot find device "nvmf_tgt_br2" 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:38.108 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.368 02:22:26 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:38.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:21:38.368 00:21:38.368 --- 10.0.0.2 ping statistics --- 00:21:38.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.368 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:38.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:38.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:21:38.368 00:21:38.368 --- 10.0.0.3 ping statistics --- 00:21:38.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.368 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:38.368 00:21:38.368 --- 10.0.0.1 ping statistics --- 00:21:38.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.368 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=79123 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 79123 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 79123 ']' 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:38.368 02:22:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.627 [2024-05-15 02:22:26.395675] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:38.627 [2024-05-15 02:22:26.395774] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:38.627 [2024-05-15 02:22:26.541162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.884 [2024-05-15 02:22:26.676523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:38.885 [2024-05-15 02:22:26.676586] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.885 [2024-05-15 02:22:26.676600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.885 [2024-05-15 02:22:26.676610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.885 [2024-05-15 02:22:26.676618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.885 [2024-05-15 02:22:26.676795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:38.885 [2024-05-15 02:22:26.676882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:38.885 [2024-05-15 02:22:26.676940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:38.885 [2024-05-15 02:22:26.676943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 [2024-05-15 02:22:27.556979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 Malloc0 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.819 02:22:27 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 [2024-05-15 02:22:27.594471] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:39.819 [2024-05-15 02:22:27.594845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.819 { 00:21:39.819 "params": { 00:21:39.819 "name": "Nvme$subsystem", 00:21:39.819 "trtype": "$TEST_TRANSPORT", 00:21:39.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.819 "adrfam": "ipv4", 00:21:39.819 "trsvcid": "$NVMF_PORT", 00:21:39.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.819 "hdgst": ${hdgst:-false}, 00:21:39.819 "ddgst": ${ddgst:-false} 00:21:39.819 }, 00:21:39.819 "method": "bdev_nvme_attach_controller" 00:21:39.819 } 00:21:39.819 EOF 00:21:39.819 )") 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:39.819 02:22:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:39.819 "params": { 00:21:39.819 "name": "Nvme1", 00:21:39.819 "trtype": "tcp", 00:21:39.819 "traddr": "10.0.0.2", 00:21:39.819 "adrfam": "ipv4", 00:21:39.819 "trsvcid": "4420", 00:21:39.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.819 "hdgst": false, 00:21:39.819 "ddgst": false 00:21:39.819 }, 00:21:39.819 "method": "bdev_nvme_attach_controller" 00:21:39.819 }' 00:21:39.820 [2024-05-15 02:22:27.644167] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:21:39.820 [2024-05-15 02:22:27.644268] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid79171 ] 00:21:39.820 [2024-05-15 02:22:27.780823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:40.078 [2024-05-15 02:22:27.951278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.078 [2024-05-15 02:22:27.954414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.078 [2024-05-15 02:22:27.954430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.336 I/O targets: 00:21:40.336 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:40.336 00:21:40.336 00:21:40.336 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.336 http://cunit.sourceforge.net/ 00:21:40.336 00:21:40.336 00:21:40.336 Suite: bdevio tests on: Nvme1n1 00:21:40.336 Test: blockdev write read block ...passed 00:21:40.336 Test: blockdev write zeroes read block ...passed 00:21:40.336 Test: blockdev write zeroes read no split ...passed 00:21:40.336 Test: blockdev write zeroes read split ...passed 00:21:40.336 Test: blockdev write zeroes read split partial ...passed 00:21:40.336 Test: blockdev reset ...[2024-05-15 02:22:28.288030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.336 [2024-05-15 02:22:28.288190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0360 (9): Bad file descriptor 00:21:40.336 [2024-05-15 02:22:28.308368] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:40.336 passed 00:21:40.336 Test: blockdev write read 8 blocks ...passed 00:21:40.336 Test: blockdev write read size > 128k ...passed 00:21:40.336 Test: blockdev write read invalid size ...passed 00:21:40.594 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:40.594 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:40.594 Test: blockdev write read max offset ...passed 00:21:40.594 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:40.594 Test: blockdev writev readv 8 blocks ...passed 00:21:40.594 Test: blockdev writev readv 30 x 1block ...passed 00:21:40.594 Test: blockdev writev readv block ...passed 00:21:40.594 Test: blockdev writev readv size > 128k ...passed 00:21:40.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:40.594 Test: blockdev comparev and writev ...[2024-05-15 02:22:28.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.484137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.484171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.484693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.484750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.484783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.484803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.485280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.485339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.485382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.485420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.485866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.485917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.485948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:40.594 [2024-05-15 02:22:28.485967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:40.594 passed 00:21:40.594 Test: blockdev nvme passthru rw ...passed 00:21:40.594 Test: blockdev nvme passthru vendor specific ...[2024-05-15 02:22:28.568820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.594 [2024-05-15 02:22:28.568878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.569014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.594 [2024-05-15 02:22:28.569044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:40.594 [2024-05-15 02:22:28.569165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.594 [2024-05-15 02:22:28.569182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:40.594 passed 00:21:40.594 Test: blockdev nvme admin passthru ...[2024-05-15 02:22:28.569295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:40.594 [2024-05-15 02:22:28.569319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:40.594 passed 00:21:40.852 Test: blockdev copy ...passed 00:21:40.852 00:21:40.852 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.852 suites 1 1 n/a 0 0 00:21:40.852 tests 23 23 23 0 0 00:21:40.852 asserts 152 152 152 0 
n/a 00:21:40.852 00:21:40.852 Elapsed time = 0.935 seconds 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.420 rmmod nvme_tcp 00:21:41.420 rmmod nvme_fabrics 00:21:41.420 rmmod nvme_keyring 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 79123 ']' 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 79123 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 79123 ']' 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 79123 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79123 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:21:41.420 killing process with pid 79123 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79123' 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 79123 00:21:41.420 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 79123 00:21:41.420 [2024-05-15 02:22:29.349149] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:41.986 00:21:41.986 real 0m3.879s 00:21:41.986 user 0m14.543s 00:21:41.986 sys 0m1.472s 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:41.986 02:22:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.986 ************************************ 00:21:41.986 END TEST nvmf_bdevio_no_huge 00:21:41.986 ************************************ 00:21:41.986 02:22:29 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:41.986 02:22:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:41.986 02:22:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:41.986 02:22:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.986 ************************************ 00:21:41.986 START TEST nvmf_tls 00:21:41.986 ************************************ 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:41.986 * Looking for test storage... 00:21:41.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.986 02:22:29 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
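Just above, nvmf/common.sh generates the per-run host identity with nvme gen-hostnqn and keeps the bare uuid as NVME_HOSTID for the NVME_HOST connect arguments. This TLS job never uses the kernel initiator (everything below goes through SPDK's own perf and bdevperf), so purely as a hypothetical illustration of how those variables are meant to be consumed elsewhere in the suite:

  hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  hostid=${hostnqn##*uuid:}          # the bare uuid, i.e. what common.sh stores as NVME_HOSTID
  # kernel-initiator connect using the same identity pair
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$hostnqn" --hostid="$hostid"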
00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:41.987 Cannot find device "nvmf_tgt_br" 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.987 Cannot find device "nvmf_tgt_br2" 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:21:41.987 Cannot find device "nvmf_tgt_br" 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:41.987 Cannot find device "nvmf_tgt_br2" 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:21:41.987 02:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:42.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:21:42.245 00:21:42.245 --- 10.0.0.2 ping statistics --- 00:21:42.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.245 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:42.245 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:42.245 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:42.245 00:21:42.245 --- 10.0.0.3 ping statistics --- 00:21:42.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.245 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:42.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:42.245 00:21:42.245 --- 10.0.0.1 ping statistics --- 00:21:42.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.245 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.245 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=79347 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 79347 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79347 ']' 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
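The nvmf_veth_init block above is what builds the job's private 10.0.0.0/24 test network: the target interface lives inside the nvmf_tgt_ns_spdk namespace while the initiator side stays in the root namespace, and the veth peers are joined by a bridge. Condensed from the commands in the log (same interface names and addresses; the second target interface nvmf_tgt_if2 / 10.0.0.3 is handled the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check, matching the 0% packet loss recorded above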
00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.502 02:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.502 [2024-05-15 02:22:30.317366] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:42.502 [2024-05-15 02:22:30.317465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.502 [2024-05-15 02:22:30.457916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.760 [2024-05-15 02:22:30.539199] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.760 [2024-05-15 02:22:30.539285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.760 [2024-05-15 02:22:30.539306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.760 [2024-05-15 02:22:30.539322] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.760 [2024-05-15 02:22:30.539337] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.760 [2024-05-15 02:22:30.539405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.345 02:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:43.345 02:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:43.345 02:22:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.345 02:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.345 02:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.603 02:22:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.603 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:43.603 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:43.861 true 00:21:43.861 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:43.861 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:44.119 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:44.119 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:44.119 02:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:44.376 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:44.376 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:44.634 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:44.634 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:44.634 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:45.199 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:45.199 02:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:45.456 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:45.456 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:45.456 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.456 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:45.712 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:45.713 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:45.713 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:45.970 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.970 02:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:46.227 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:46.227 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:46.227 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:46.484 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:46.484 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:46.811 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3RLjMnm26r 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 
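Before any subsystem exists, tls.sh exercises the ssl socket implementation over RPC: it pins the TLS version, reads it back with jq, checks that kTLS can be toggled on and off, and then formats two interchange-format PSKs (one that the target will be given, one deliberately different) into 0600 temp files. The round-trips, condensed from the log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_set_default_impl -i ssl
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  [[ $($rpc_py sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
  $rpc_py sock_impl_set_options -i ssl --enable-ktls
  [[ $($rpc_py sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
  $rpc_py sock_impl_set_options -i ssl --disable-ktls
  # one of the two interchange strings from this run, written the way tls.sh does it
  # (the second key is produced and stored identically)
  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"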
00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lPyGNlY60H 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3RLjMnm26r 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lPyGNlY60H 00:21:47.070 02:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:47.329 02:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:47.587 02:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3RLjMnm26r 00:21:47.587 02:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3RLjMnm26r 00:21:47.587 02:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:47.846 [2024-05-15 02:22:35.821085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.846 02:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:48.105 02:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:48.363 [2024-05-15 02:22:36.345305] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:48.363 [2024-05-15 02:22:36.345454] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.363 [2024-05-15 02:22:36.345645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.363 02:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:48.622 malloc0 00:21:48.622 02:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:48.881 02:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RLjMnm26r 00:21:49.140 [2024-05-15 02:22:37.100198] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.140 02:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3RLjMnm26r 00:22:01.347 Initializing NVMe Controllers 00:22:01.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:01.347 Initialization complete. Launching workers. 
00:22:01.347 ======================================================== 00:22:01.347 Latency(us) 00:22:01.347 Device Information : IOPS MiB/s Average min max 00:22:01.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8477.50 33.12 7551.39 1092.46 11884.29 00:22:01.347 ======================================================== 00:22:01.347 Total : 8477.50 33.12 7551.39 1092.46 11884.29 00:22:01.347 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RLjMnm26r 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RLjMnm26r' 00:22:01.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79609 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79609 /var/tmp/bdevperf.sock 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79609 ']' 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:01.347 02:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.347 [2024-05-15 02:22:47.379588] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
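The throughput summary above comes from spdk_nvme_perf dialing the TLS listener from inside the target namespace. Compared with the earlier plaintext target, the setup differs in two RPCs: the listener is added with -k (the flag that triggers the 'TLS support is considered experimental' notice above) and the host NQN is authorized with a PSK file. A condensed sketch using the same names and key path as this run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RLjMnm26r
  # queue depth 64, 4 KiB mixed random I/O (-M 30) for 10 s over the ssl sock implementation
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.3RLjMnm26r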
00:22:01.347 [2024-05-15 02:22:47.379728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79609 ] 00:22:01.347 [2024-05-15 02:22:47.522581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.347 [2024-05-15 02:22:47.582265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.347 02:22:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:01.347 02:22:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:01.347 02:22:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RLjMnm26r 00:22:01.347 [2024-05-15 02:22:48.516331] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.347 [2024-05-15 02:22:48.516484] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:01.347 TLSTESTn1 00:22:01.347 02:22:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:01.347 Running I/O for 10 seconds... 00:22:11.323 00:22:11.323 Latency(us) 00:22:11.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.323 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:11.323 Verification LBA range: start 0x0 length 0x2000 00:22:11.323 TLSTESTn1 : 10.04 3436.02 13.42 0.00 0.00 37140.26 9472.93 37176.79 00:22:11.323 =================================================================================================================== 00:22:11.323 Total : 3436.02 13.42 0.00 0.00 37140.26 9472.93 37176.79 00:22:11.323 0 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 79609 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79609 ']' 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79609 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79609 00:22:11.323 killing process with pid 79609 00:22:11.323 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.323 00:22:11.323 Latency(us) 00:22:11.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.323 =================================================================================================================== 00:22:11.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79609' 00:22:11.323 
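The TLSTESTn1 run above repeats the handshake through bdevperf rather than perf: bdevperf starts idle (-z) on its own RPC socket, the TLS controller is attached over that socket with --psk, and bdevperf.py then drives the configured verify workload. Roughly, with the paths used in this job (the script waits on the socket with waitforlisten; plain backgrounding and a sleep stand in for that here):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
  sleep 1   # stand-in for waitforlisten on $sock
  $rpc_py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RLjMnm26r
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests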
02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79609 00:22:11.323 [2024-05-15 02:22:58.808064] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.323 02:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79609 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPyGNlY60H 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPyGNlY60H 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:11.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPyGNlY60H 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lPyGNlY60H' 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79689 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79689 /var/tmp/bdevperf.sock 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79689 ']' 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.323 02:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.323 [2024-05-15 02:22:59.077993] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:11.323 [2024-05-15 02:22:59.078381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79689 ] 00:22:11.323 [2024-05-15 02:22:59.220289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.323 [2024-05-15 02:22:59.311451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.259 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.259 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:12.259 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lPyGNlY60H 00:22:12.517 [2024-05-15 02:23:00.415456] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.517 [2024-05-15 02:23:00.415571] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.517 [2024-05-15 02:23:00.420685] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:12.517 [2024-05-15 02:23:00.421206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869a40 (107): Transport endpoint is not connected 00:22:12.517 [2024-05-15 02:23:00.422192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869a40 (9): Bad file descriptor 00:22:12.517 [2024-05-15 02:23:00.423187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.517 [2024-05-15 02:23:00.423213] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:12.517 [2024-05-15 02:23:00.423224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
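The bdevperf instance launched above (pid 79689) is the first negative case: the attach presents the other key, /tmp/tmp.lPyGNlY60H, which does not match the PSK registered for host1 on the target, so the TLS handshake never completes and the RPC fails with the JSON-RPC error dumped just below. tls.sh wraps the attempt in its NOT helper; stripped down, the check is simply that the attach must not succeed:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lPyGNlY60H; then
    echo 'unexpected: attach with the wrong PSK succeeded' >&2
    exit 1
  fi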
00:22:12.517 2024/05/15 02:23:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.lPyGNlY60H subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:12.517 request: 00:22:12.517 { 00:22:12.517 "method": "bdev_nvme_attach_controller", 00:22:12.517 "params": { 00:22:12.517 "name": "TLSTEST", 00:22:12.517 "trtype": "tcp", 00:22:12.517 "traddr": "10.0.0.2", 00:22:12.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.517 "adrfam": "ipv4", 00:22:12.517 "trsvcid": "4420", 00:22:12.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.517 "psk": "/tmp/tmp.lPyGNlY60H" 00:22:12.517 } 00:22:12.517 } 00:22:12.517 Got JSON-RPC error response 00:22:12.517 GoRPCClient: error on JSON-RPC call 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79689 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79689 ']' 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79689 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79689 00:22:12.517 killing process with pid 79689 00:22:12.517 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.517 00:22:12.517 Latency(us) 00:22:12.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.517 =================================================================================================================== 00:22:12.517 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79689' 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79689 00:22:12.517 [2024-05-15 02:23:00.475820] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:12.517 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79689 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RLjMnm26r 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RLjMnm26r 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RLjMnm26r 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RLjMnm26r' 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79723 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79723 /var/tmp/bdevperf.sock 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79723 ']' 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.776 02:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.776 [2024-05-15 02:23:00.737159] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:12.776 [2024-05-15 02:23:00.737248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79723 ] 00:22:13.033 [2024-05-15 02:23:00.870941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.033 [2024-05-15 02:23:00.930435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.291 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:13.291 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:13.291 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3RLjMnm26r 00:22:13.549 [2024-05-15 02:23:01.458609] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.549 [2024-05-15 02:23:01.458726] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:13.549 [2024-05-15 02:23:01.467971] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:13.549 [2024-05-15 02:23:01.468025] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:13.549 [2024-05-15 02:23:01.468098] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:13.549 [2024-05-15 02:23:01.468310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcfa40 (107): Transport endpoint is not connected 00:22:13.549 [2024-05-15 02:23:01.469294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcfa40 (9): Bad file descriptor 00:22:13.549 [2024-05-15 02:23:01.470290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:13.549 [2024-05-15 02:23:01.470319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:13.550 [2024-05-15 02:23:01.470330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
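The next negative case keeps the correct key but connects as nqn.2016-06.io.spdk:host2, and the target reports 'Could not find PSK for identity' because only host1 was ever authorized on cnode1; the attach therefore fails with the JSON-RPC error that follows. Authorizing an additional host would just be one more add_host call; a hypothetical example (host2's key file is illustrative, not part of this run):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # allow host2 to connect to cnode1 with its own PSK file
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key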
00:22:13.550 2024/05/15 02:23:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.3RLjMnm26r subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:13.550 request: 00:22:13.550 { 00:22:13.550 "method": "bdev_nvme_attach_controller", 00:22:13.550 "params": { 00:22:13.550 "name": "TLSTEST", 00:22:13.550 "trtype": "tcp", 00:22:13.550 "traddr": "10.0.0.2", 00:22:13.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:13.550 "adrfam": "ipv4", 00:22:13.550 "trsvcid": "4420", 00:22:13.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.550 "psk": "/tmp/tmp.3RLjMnm26r" 00:22:13.550 } 00:22:13.550 } 00:22:13.550 Got JSON-RPC error response 00:22:13.550 GoRPCClient: error on JSON-RPC call 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79723 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79723 ']' 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79723 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79723 00:22:13.550 killing process with pid 79723 00:22:13.550 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.550 00:22:13.550 Latency(us) 00:22:13.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.550 =================================================================================================================== 00:22:13.550 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79723' 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79723 00:22:13.550 [2024-05-15 02:23:01.517986] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:13.550 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79723 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RLjMnm26r 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RLjMnm26r 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RLjMnm26r 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RLjMnm26r' 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79749 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79749 /var/tmp/bdevperf.sock 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79749 ']' 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:13.808 02:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.808 [2024-05-15 02:23:01.755804] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:13.808 [2024-05-15 02:23:01.755910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79749 ] 00:22:14.066 [2024-05-15 02:23:01.887232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.066 [2024-05-15 02:23:01.973330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.066 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.066 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:14.066 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RLjMnm26r 00:22:14.325 [2024-05-15 02:23:02.304488] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.325 [2024-05-15 02:23:02.304648] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.325 [2024-05-15 02:23:02.310241] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:14.325 [2024-05-15 02:23:02.310301] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:14.325 [2024-05-15 02:23:02.310382] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.325 [2024-05-15 02:23:02.310931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12efa40 (107): Transport endpoint is not connected 00:22:14.325 [2024-05-15 02:23:02.311913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12efa40 (9): Bad file descriptor 00:22:14.325 [2024-05-15 02:23:02.312910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:14.325 [2024-05-15 02:23:02.312946] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:14.325 [2024-05-15 02:23:02.312961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
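The second mismatch case fails the same way, this time against a subsystem (cnode2) that the PSK was never added to. The error lines show the identity string the target resolves a PSK with, "NVMe0R01 <hostnqn> <subnqn>". A tiny illustrative sketch of that pairing, using the NQNs from this run (shown only to make the mismatch visible; the string is taken verbatim from the error above):

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
# No PSK was registered under this host/subsystem pair, hence the lookup failure.
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"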
00:22:14.325 2024/05/15 02:23:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.3RLjMnm26r subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:14.325 request: 00:22:14.325 { 00:22:14.325 "method": "bdev_nvme_attach_controller", 00:22:14.325 "params": { 00:22:14.325 "name": "TLSTEST", 00:22:14.325 "trtype": "tcp", 00:22:14.325 "traddr": "10.0.0.2", 00:22:14.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.325 "adrfam": "ipv4", 00:22:14.325 "trsvcid": "4420", 00:22:14.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:14.325 "psk": "/tmp/tmp.3RLjMnm26r" 00:22:14.325 } 00:22:14.325 } 00:22:14.325 Got JSON-RPC error response 00:22:14.325 GoRPCClient: error on JSON-RPC call 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79749 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79749 ']' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79749 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79749 00:22:14.583 killing process with pid 79749 00:22:14.583 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.583 00:22:14.583 Latency(us) 00:22:14.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.583 =================================================================================================================== 00:22:14.583 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79749' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79749 00:22:14.583 [2024-05-15 02:23:02.367068] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79749 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:14.583 02:23:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:14.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79774 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79774 /var/tmp/bdevperf.sock 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79774 ']' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.583 02:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.841 [2024-05-15 02:23:02.625316] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:14.841 [2024-05-15 02:23:02.625722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79774 ] 00:22:14.841 [2024-05-15 02:23:02.769907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.841 [2024-05-15 02:23:02.852815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.790 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.790 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:15.790 02:23:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:16.047 [2024-05-15 02:23:03.939535] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.048 [2024-05-15 02:23:03.941448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9a00 (9): Bad file descriptor 00:22:16.048 [2024-05-15 02:23:03.942443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.048 [2024-05-15 02:23:03.942467] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:16.048 [2024-05-15 02:23:03.942478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.048 2024/05/15 02:23:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:16.048 request: 00:22:16.048 { 00:22:16.048 "method": "bdev_nvme_attach_controller", 00:22:16.048 "params": { 00:22:16.048 "name": "TLSTEST", 00:22:16.048 "trtype": "tcp", 00:22:16.048 "traddr": "10.0.0.2", 00:22:16.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.048 "adrfam": "ipv4", 00:22:16.048 "trsvcid": "4420", 00:22:16.048 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:22:16.048 } 00:22:16.048 } 00:22:16.048 Got JSON-RPC error response 00:22:16.048 GoRPCClient: error on JSON-RPC call 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79774 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79774 ']' 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79774 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79774 00:22:16.048 killing process with pid 79774 00:22:16.048 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.048 00:22:16.048 Latency(us) 00:22:16.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.048 =================================================================================================================== 00:22:16.048 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79774' 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79774 00:22:16.048 02:23:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79774 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 79347 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79347 ']' 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79347 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79347 00:22:16.306 killing process with pid 79347 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79347' 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79347 00:22:16.306 [2024-05-15 02:23:04.193339] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:16.306 [2024-05-15 02:23:04.193405] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.306 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79347 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.y9j5LVBeaS 00:22:16.565 
02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.y9j5LVBeaS 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=79819 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 79819 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79819 ']' 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.565 02:23:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.565 [2024-05-15 02:23:04.518167] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:16.565 [2024-05-15 02:23:04.518261] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.822 [2024-05-15 02:23:04.655304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.822 [2024-05-15 02:23:04.713948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.822 [2024-05-15 02:23:04.714004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.822 [2024-05-15 02:23:04.714016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.822 [2024-05-15 02:23:04.714024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.822 [2024-05-15 02:23:04.714031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
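Before the 0x2-mask target is started, tls.sh materialises the generated interchange key (the NVMeTLSkey-1:02: string from format_interchange_psk) into a mktemp file and locks it down to 0600; later steps show the target refusing anything more permissive. A condensed sketch of those few lines, with the key and path taken from the log above (the redirect is implied by the echo in the trace):

key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
key_long_path=$(mktemp)            # /tmp/tmp.y9j5LVBeaS in this run
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"        # looser modes are rejected by the target later on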
00:22:16.822 [2024-05-15 02:23:04.714063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.y9j5LVBeaS 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.754 [2024-05-15 02:23:05.730612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.754 02:23:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.012 02:23:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:18.269 [2024-05-15 02:23:06.210683] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:18.269 [2024-05-15 02:23:06.210829] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.269 [2024-05-15 02:23:06.211015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.269 02:23:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.527 malloc0 00:22:18.527 02:23:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.093 02:23:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:19.093 [2024-05-15 02:23:07.062557] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y9j5LVBeaS 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y9j5LVBeaS' 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79904 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM 
EXIT 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79904 /var/tmp/bdevperf.sock 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79904 ']' 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.093 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.353 [2024-05-15 02:23:07.139874] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:19.353 [2024-05-15 02:23:07.139968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79904 ] 00:22:19.353 [2024-05-15 02:23:07.272101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.353 [2024-05-15 02:23:07.331799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.612 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.612 02:23:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:19.612 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:19.870 [2024-05-15 02:23:07.655711] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.870 [2024-05-15 02:23:07.655858] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:19.870 TLSTESTn1 00:22:19.870 02:23:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:19.870 Running I/O for 10 seconds... 
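With matching key material on both ends the attach succeeds, a TLSTESTn1 bdev is created, and the verify workload below runs over TLS for ten seconds. The two commands driving it, as they appear in the trace (socket path and queue/IO-size options are this run's values; bdevperf is started in the background with -z so it waits for RPCs):

# Start bdevperf in RPC-wait mode on a private socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Once bdev_nvme_attach_controller has created TLSTESTn1, kick off the I/O.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests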
00:22:32.088 00:22:32.088 Latency(us) 00:22:32.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.088 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.088 Verification LBA range: start 0x0 length 0x2000 00:22:32.088 TLSTESTn1 : 10.02 3789.70 14.80 0.00 0.00 33708.95 7030.23 30265.72 00:22:32.088 =================================================================================================================== 00:22:32.088 Total : 3789.70 14.80 0.00 0.00 33708.95 7030.23 30265.72 00:22:32.088 0 00:22:32.088 02:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.088 02:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 79904 00:22:32.088 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79904 ']' 00:22:32.088 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79904 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79904 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79904' 00:22:32.089 killing process with pid 79904 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79904 00:22:32.089 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.089 00:22:32.089 Latency(us) 00:22:32.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.089 =================================================================================================================== 00:22:32.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.089 [2024-05-15 02:23:17.950704] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.089 02:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79904 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.089 
02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y9j5LVBeaS' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79977 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79977 /var/tmp/bdevperf.sock 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79977 ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.089 [2024-05-15 02:23:18.205989] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:32.089 [2024-05-15 02:23:18.206080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79977 ] 00:22:32.089 [2024-05-15 02:23:18.343019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.089 [2024-05-15 02:23:18.403698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:32.089 [2024-05-15 02:23:18.750968] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.089 [2024-05-15 02:23:18.751053] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:32.089 [2024-05-15 02:23:18.751071] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.y9j5LVBeaS 00:22:32.089 2024/05/15 02:23:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.y9j5LVBeaS subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:22:32.089 request: 00:22:32.089 { 00:22:32.089 "method": 
"bdev_nvme_attach_controller", 00:22:32.089 "params": { 00:22:32.089 "name": "TLSTEST", 00:22:32.089 "trtype": "tcp", 00:22:32.089 "traddr": "10.0.0.2", 00:22:32.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.089 "adrfam": "ipv4", 00:22:32.089 "trsvcid": "4420", 00:22:32.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.089 "psk": "/tmp/tmp.y9j5LVBeaS" 00:22:32.089 } 00:22:32.089 } 00:22:32.089 Got JSON-RPC error response 00:22:32.089 GoRPCClient: error on JSON-RPC call 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79977 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79977 ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79977 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79977 00:22:32.089 killing process with pid 79977 00:22:32.089 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.089 00:22:32.089 Latency(us) 00:22:32.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.089 =================================================================================================================== 00:22:32.089 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79977' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79977 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79977 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 79819 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79819 ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79819 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.089 02:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79819 00:22:32.089 killing process with pid 79819 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79819' 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79819 00:22:32.089 [2024-05-15 02:23:19.005617] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:32.089 [2024-05-15 02:23:19.005667] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79819 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80008 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80008 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80008 ']' 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.089 [2024-05-15 02:23:19.264703] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:32.089 [2024-05-15 02:23:19.264804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.089 [2024-05-15 02:23:19.404539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.089 [2024-05-15 02:23:19.463909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.089 [2024-05-15 02:23:19.463963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.089 [2024-05-15 02:23:19.463974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.089 [2024-05-15 02:23:19.463982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.089 [2024-05-15 02:23:19.463989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
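The target has just been restarted (pid 80008), and the next block expects setup_nvmf_tgt to fail because the key file was flipped to 0666 a few steps earlier. A quick illustrative guard for that requirement, not part of tls.sh, assuming GNU stat:

psk=/tmp/tmp.y9j5LVBeaS
mode=$(stat -c '%a' "$psk")
if [[ "$mode" != 600 ]]; then
    echo "refusing to use $psk: mode is $mode, expected 0600" >&2
    exit 1
fi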
00:22:32.089 [2024-05-15 02:23:19.464022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.y9j5LVBeaS 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.089 [2024-05-15 02:23:19.865158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.089 02:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.351 02:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.608 [2024-05-15 02:23:20.393222] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:32.608 [2024-05-15 02:23:20.393335] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.608 [2024-05-15 02:23:20.393541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.608 02:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.867 malloc0 00:22:32.867 02:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.127 02:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:33.386 [2024-05-15 02:23:21.268160] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:33.386 [2024-05-15 02:23:21.268209] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
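The setup_nvmf_tgt calls above are the full target-side recipe: a TCP transport, a subsystem, a listener created with -k (TLS), a malloc namespace, and finally nvmf_subsystem_add_host with --psk, which is the step that fails in this pass while the key is still world-readable. Condensed from the RPC invocations in the trace (rpc.py path, NQNs and key path are this run's values):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/tmp.y9j5LVBeaS

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# This is the call that fails here: the PSK file is 0666 and the target insists on 0600.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"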
00:22:33.386 [2024-05-15 02:23:21.268243] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:33.386 2024/05/15 02:23:21 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.y9j5LVBeaS], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:33.386 request: 00:22:33.386 { 00:22:33.386 "method": "nvmf_subsystem_add_host", 00:22:33.386 "params": { 00:22:33.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.386 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.386 "psk": "/tmp/tmp.y9j5LVBeaS" 00:22:33.386 } 00:22:33.386 } 00:22:33.386 Got JSON-RPC error response 00:22:33.386 GoRPCClient: error on JSON-RPC call 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 80008 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80008 ']' 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80008 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80008 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:33.386 killing process with pid 80008 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80008' 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80008 00:22:33.386 [2024-05-15 02:23:21.311234] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:33.386 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80008 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.y9j5LVBeaS 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80093 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80093 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80093 ']' 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.645 02:23:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.646 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.646 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.646 02:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.646 [2024-05-15 02:23:21.561064] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:33.646 [2024-05-15 02:23:21.561166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.905 [2024-05-15 02:23:21.694975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.905 [2024-05-15 02:23:21.751828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.905 [2024-05-15 02:23:21.751880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.905 [2024-05-15 02:23:21.751892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.905 [2024-05-15 02:23:21.751901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.905 [2024-05-15 02:23:21.751909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.905 [2024-05-15 02:23:21.751933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.y9j5LVBeaS 00:22:34.840 02:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.098 [2024-05-15 02:23:22.878331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.098 02:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.357 02:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.635 [2024-05-15 02:23:23.462435] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:35.635 [2024-05-15 02:23:23.462539] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.635 
[2024-05-15 02:23:23.462709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.635 02:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.892 malloc0 00:22:35.892 02:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.150 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:36.408 [2024-05-15 02:23:24.269122] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=80181 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 80181 /var/tmp/bdevperf.sock 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80181 ']' 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.408 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.408 [2024-05-15 02:23:24.342620] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:36.408 [2024-05-15 02:23:24.342710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80181 ] 00:22:36.666 [2024-05-15 02:23:24.477773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.666 [2024-05-15 02:23:24.551621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.666 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.666 02:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:36.666 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:36.924 [2024-05-15 02:23:24.854862] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.925 [2024-05-15 02:23:24.854973] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.925 TLSTESTn1 00:22:37.183 02:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:37.475 02:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:37.475 "subsystems": [ 00:22:37.475 { 00:22:37.475 "subsystem": "keyring", 00:22:37.475 "config": [] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "iobuf", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "iobuf_set_options", 00:22:37.475 "params": { 00:22:37.475 "large_bufsize": 135168, 00:22:37.475 "large_pool_count": 1024, 00:22:37.475 "small_bufsize": 8192, 00:22:37.475 "small_pool_count": 8192 00:22:37.475 } 00:22:37.475 } 00:22:37.475 ] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "sock", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "sock_impl_set_options", 00:22:37.475 "params": { 00:22:37.475 "enable_ktls": false, 00:22:37.475 "enable_placement_id": 0, 00:22:37.475 "enable_quickack": false, 00:22:37.475 "enable_recv_pipe": true, 00:22:37.475 "enable_zerocopy_send_client": false, 00:22:37.475 "enable_zerocopy_send_server": true, 00:22:37.475 "impl_name": "posix", 00:22:37.475 "recv_buf_size": 2097152, 00:22:37.475 "send_buf_size": 2097152, 00:22:37.475 "tls_version": 0, 00:22:37.475 "zerocopy_threshold": 0 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "sock_impl_set_options", 00:22:37.475 "params": { 00:22:37.475 "enable_ktls": false, 00:22:37.475 "enable_placement_id": 0, 00:22:37.475 "enable_quickack": false, 00:22:37.475 "enable_recv_pipe": true, 00:22:37.475 "enable_zerocopy_send_client": false, 00:22:37.475 "enable_zerocopy_send_server": true, 00:22:37.475 "impl_name": "ssl", 00:22:37.475 "recv_buf_size": 4096, 00:22:37.475 "send_buf_size": 4096, 00:22:37.475 "tls_version": 0, 00:22:37.475 "zerocopy_threshold": 0 00:22:37.475 } 00:22:37.475 } 00:22:37.475 ] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "vmd", 00:22:37.475 "config": [] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "accel", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "accel_set_options", 00:22:37.475 "params": { 00:22:37.475 "buf_count": 2048, 00:22:37.475 "large_cache_size": 16, 00:22:37.475 
"sequence_count": 2048, 00:22:37.475 "small_cache_size": 128, 00:22:37.475 "task_count": 2048 00:22:37.475 } 00:22:37.475 } 00:22:37.475 ] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "bdev", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "bdev_set_options", 00:22:37.475 "params": { 00:22:37.475 "bdev_auto_examine": true, 00:22:37.475 "bdev_io_cache_size": 256, 00:22:37.475 "bdev_io_pool_size": 65535, 00:22:37.475 "iobuf_large_cache_size": 16, 00:22:37.475 "iobuf_small_cache_size": 128 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_raid_set_options", 00:22:37.475 "params": { 00:22:37.475 "process_window_size_kb": 1024 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_iscsi_set_options", 00:22:37.475 "params": { 00:22:37.475 "timeout_sec": 30 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_nvme_set_options", 00:22:37.475 "params": { 00:22:37.475 "action_on_timeout": "none", 00:22:37.475 "allow_accel_sequence": false, 00:22:37.475 "arbitration_burst": 0, 00:22:37.475 "bdev_retry_count": 3, 00:22:37.475 "ctrlr_loss_timeout_sec": 0, 00:22:37.475 "delay_cmd_submit": true, 00:22:37.475 "dhchap_dhgroups": [ 00:22:37.475 "null", 00:22:37.475 "ffdhe2048", 00:22:37.475 "ffdhe3072", 00:22:37.475 "ffdhe4096", 00:22:37.475 "ffdhe6144", 00:22:37.475 "ffdhe8192" 00:22:37.475 ], 00:22:37.475 "dhchap_digests": [ 00:22:37.475 "sha256", 00:22:37.475 "sha384", 00:22:37.475 "sha512" 00:22:37.475 ], 00:22:37.475 "disable_auto_failback": false, 00:22:37.475 "fast_io_fail_timeout_sec": 0, 00:22:37.475 "generate_uuids": false, 00:22:37.475 "high_priority_weight": 0, 00:22:37.475 "io_path_stat": false, 00:22:37.475 "io_queue_requests": 0, 00:22:37.475 "keep_alive_timeout_ms": 10000, 00:22:37.475 "low_priority_weight": 0, 00:22:37.475 "medium_priority_weight": 0, 00:22:37.475 "nvme_adminq_poll_period_us": 10000, 00:22:37.475 "nvme_error_stat": false, 00:22:37.475 "nvme_ioq_poll_period_us": 0, 00:22:37.475 "rdma_cm_event_timeout_ms": 0, 00:22:37.475 "rdma_max_cq_size": 0, 00:22:37.475 "rdma_srq_size": 0, 00:22:37.475 "reconnect_delay_sec": 0, 00:22:37.475 "timeout_admin_us": 0, 00:22:37.475 "timeout_us": 0, 00:22:37.475 "transport_ack_timeout": 0, 00:22:37.475 "transport_retry_count": 4, 00:22:37.475 "transport_tos": 0 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_nvme_set_hotplug", 00:22:37.475 "params": { 00:22:37.475 "enable": false, 00:22:37.475 "period_us": 100000 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_malloc_create", 00:22:37.475 "params": { 00:22:37.475 "block_size": 4096, 00:22:37.475 "name": "malloc0", 00:22:37.475 "num_blocks": 8192, 00:22:37.475 "optimal_io_boundary": 0, 00:22:37.475 "physical_block_size": 4096, 00:22:37.475 "uuid": "e4f02bf3-7476-499b-ae14-d62beac61820" 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "bdev_wait_for_examine" 00:22:37.475 } 00:22:37.475 ] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "nbd", 00:22:37.475 "config": [] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "scheduler", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "framework_set_scheduler", 00:22:37.475 "params": { 00:22:37.475 "name": "static" 00:22:37.475 } 00:22:37.475 } 00:22:37.475 ] 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "subsystem": "nvmf", 00:22:37.475 "config": [ 00:22:37.475 { 00:22:37.475 "method": "nvmf_set_config", 00:22:37.475 "params": { 00:22:37.475 
"admin_cmd_passthru": { 00:22:37.475 "identify_ctrlr": false 00:22:37.475 }, 00:22:37.475 "discovery_filter": "match_any" 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_set_max_subsystems", 00:22:37.475 "params": { 00:22:37.475 "max_subsystems": 1024 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_set_crdt", 00:22:37.475 "params": { 00:22:37.475 "crdt1": 0, 00:22:37.475 "crdt2": 0, 00:22:37.475 "crdt3": 0 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_create_transport", 00:22:37.475 "params": { 00:22:37.475 "abort_timeout_sec": 1, 00:22:37.475 "ack_timeout": 0, 00:22:37.475 "buf_cache_size": 4294967295, 00:22:37.475 "c2h_success": false, 00:22:37.475 "data_wr_pool_size": 0, 00:22:37.475 "dif_insert_or_strip": false, 00:22:37.475 "in_capsule_data_size": 4096, 00:22:37.475 "io_unit_size": 131072, 00:22:37.475 "max_aq_depth": 128, 00:22:37.475 "max_io_qpairs_per_ctrlr": 127, 00:22:37.475 "max_io_size": 131072, 00:22:37.475 "max_queue_depth": 128, 00:22:37.475 "num_shared_buffers": 511, 00:22:37.475 "sock_priority": 0, 00:22:37.475 "trtype": "TCP", 00:22:37.475 "zcopy": false 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_create_subsystem", 00:22:37.475 "params": { 00:22:37.475 "allow_any_host": false, 00:22:37.475 "ana_reporting": false, 00:22:37.475 "max_cntlid": 65519, 00:22:37.475 "max_namespaces": 10, 00:22:37.475 "min_cntlid": 1, 00:22:37.475 "model_number": "SPDK bdev Controller", 00:22:37.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.475 "serial_number": "SPDK00000000000001" 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_subsystem_add_host", 00:22:37.475 "params": { 00:22:37.475 "host": "nqn.2016-06.io.spdk:host1", 00:22:37.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.475 "psk": "/tmp/tmp.y9j5LVBeaS" 00:22:37.475 } 00:22:37.475 }, 00:22:37.475 { 00:22:37.475 "method": "nvmf_subsystem_add_ns", 00:22:37.475 "params": { 00:22:37.476 "namespace": { 00:22:37.476 "bdev_name": "malloc0", 00:22:37.476 "nguid": "E4F02BF37476499BAE14D62BEAC61820", 00:22:37.476 "no_auto_visible": false, 00:22:37.476 "nsid": 1, 00:22:37.476 "uuid": "e4f02bf3-7476-499b-ae14-d62beac61820" 00:22:37.476 }, 00:22:37.476 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:37.476 } 00:22:37.476 }, 00:22:37.476 { 00:22:37.476 "method": "nvmf_subsystem_add_listener", 00:22:37.476 "params": { 00:22:37.476 "listen_address": { 00:22:37.476 "adrfam": "IPv4", 00:22:37.476 "traddr": "10.0.0.2", 00:22:37.476 "trsvcid": "4420", 00:22:37.476 "trtype": "TCP" 00:22:37.476 }, 00:22:37.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.476 "secure_channel": true 00:22:37.476 } 00:22:37.476 } 00:22:37.476 ] 00:22:37.476 } 00:22:37.476 ] 00:22:37.476 }' 00:22:37.476 02:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:37.760 02:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:37.760 "subsystems": [ 00:22:37.760 { 00:22:37.760 "subsystem": "keyring", 00:22:37.760 "config": [] 00:22:37.760 }, 00:22:37.760 { 00:22:37.760 "subsystem": "iobuf", 00:22:37.760 "config": [ 00:22:37.761 { 00:22:37.761 "method": "iobuf_set_options", 00:22:37.761 "params": { 00:22:37.761 "large_bufsize": 135168, 00:22:37.761 "large_pool_count": 1024, 00:22:37.761 "small_bufsize": 8192, 00:22:37.761 "small_pool_count": 8192 00:22:37.761 } 00:22:37.761 } 00:22:37.761 ] 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "subsystem": 
"sock", 00:22:37.761 "config": [ 00:22:37.761 { 00:22:37.761 "method": "sock_impl_set_options", 00:22:37.761 "params": { 00:22:37.761 "enable_ktls": false, 00:22:37.761 "enable_placement_id": 0, 00:22:37.761 "enable_quickack": false, 00:22:37.761 "enable_recv_pipe": true, 00:22:37.761 "enable_zerocopy_send_client": false, 00:22:37.761 "enable_zerocopy_send_server": true, 00:22:37.761 "impl_name": "posix", 00:22:37.761 "recv_buf_size": 2097152, 00:22:37.761 "send_buf_size": 2097152, 00:22:37.761 "tls_version": 0, 00:22:37.761 "zerocopy_threshold": 0 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "sock_impl_set_options", 00:22:37.761 "params": { 00:22:37.761 "enable_ktls": false, 00:22:37.761 "enable_placement_id": 0, 00:22:37.761 "enable_quickack": false, 00:22:37.761 "enable_recv_pipe": true, 00:22:37.761 "enable_zerocopy_send_client": false, 00:22:37.761 "enable_zerocopy_send_server": true, 00:22:37.761 "impl_name": "ssl", 00:22:37.761 "recv_buf_size": 4096, 00:22:37.761 "send_buf_size": 4096, 00:22:37.761 "tls_version": 0, 00:22:37.761 "zerocopy_threshold": 0 00:22:37.761 } 00:22:37.761 } 00:22:37.761 ] 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "subsystem": "vmd", 00:22:37.761 "config": [] 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "subsystem": "accel", 00:22:37.761 "config": [ 00:22:37.761 { 00:22:37.761 "method": "accel_set_options", 00:22:37.761 "params": { 00:22:37.761 "buf_count": 2048, 00:22:37.761 "large_cache_size": 16, 00:22:37.761 "sequence_count": 2048, 00:22:37.761 "small_cache_size": 128, 00:22:37.761 "task_count": 2048 00:22:37.761 } 00:22:37.761 } 00:22:37.761 ] 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "subsystem": "bdev", 00:22:37.761 "config": [ 00:22:37.761 { 00:22:37.761 "method": "bdev_set_options", 00:22:37.761 "params": { 00:22:37.761 "bdev_auto_examine": true, 00:22:37.761 "bdev_io_cache_size": 256, 00:22:37.761 "bdev_io_pool_size": 65535, 00:22:37.761 "iobuf_large_cache_size": 16, 00:22:37.761 "iobuf_small_cache_size": 128 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_raid_set_options", 00:22:37.761 "params": { 00:22:37.761 "process_window_size_kb": 1024 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_iscsi_set_options", 00:22:37.761 "params": { 00:22:37.761 "timeout_sec": 30 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_nvme_set_options", 00:22:37.761 "params": { 00:22:37.761 "action_on_timeout": "none", 00:22:37.761 "allow_accel_sequence": false, 00:22:37.761 "arbitration_burst": 0, 00:22:37.761 "bdev_retry_count": 3, 00:22:37.761 "ctrlr_loss_timeout_sec": 0, 00:22:37.761 "delay_cmd_submit": true, 00:22:37.761 "dhchap_dhgroups": [ 00:22:37.761 "null", 00:22:37.761 "ffdhe2048", 00:22:37.761 "ffdhe3072", 00:22:37.761 "ffdhe4096", 00:22:37.761 "ffdhe6144", 00:22:37.761 "ffdhe8192" 00:22:37.761 ], 00:22:37.761 "dhchap_digests": [ 00:22:37.761 "sha256", 00:22:37.761 "sha384", 00:22:37.761 "sha512" 00:22:37.761 ], 00:22:37.761 "disable_auto_failback": false, 00:22:37.761 "fast_io_fail_timeout_sec": 0, 00:22:37.761 "generate_uuids": false, 00:22:37.761 "high_priority_weight": 0, 00:22:37.761 "io_path_stat": false, 00:22:37.761 "io_queue_requests": 512, 00:22:37.761 "keep_alive_timeout_ms": 10000, 00:22:37.761 "low_priority_weight": 0, 00:22:37.761 "medium_priority_weight": 0, 00:22:37.761 "nvme_adminq_poll_period_us": 10000, 00:22:37.761 "nvme_error_stat": false, 00:22:37.761 "nvme_ioq_poll_period_us": 0, 00:22:37.761 "rdma_cm_event_timeout_ms": 0, 
00:22:37.761 "rdma_max_cq_size": 0, 00:22:37.761 "rdma_srq_size": 0, 00:22:37.761 "reconnect_delay_sec": 0, 00:22:37.761 "timeout_admin_us": 0, 00:22:37.761 "timeout_us": 0, 00:22:37.761 "transport_ack_timeout": 0, 00:22:37.761 "transport_retry_count": 4, 00:22:37.761 "transport_tos": 0 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_nvme_attach_controller", 00:22:37.761 "params": { 00:22:37.761 "adrfam": "IPv4", 00:22:37.761 "ctrlr_loss_timeout_sec": 0, 00:22:37.761 "ddgst": false, 00:22:37.761 "fast_io_fail_timeout_sec": 0, 00:22:37.761 "hdgst": false, 00:22:37.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.761 "name": "TLSTEST", 00:22:37.761 "prchk_guard": false, 00:22:37.761 "prchk_reftag": false, 00:22:37.761 "psk": "/tmp/tmp.y9j5LVBeaS", 00:22:37.761 "reconnect_delay_sec": 0, 00:22:37.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.761 "traddr": "10.0.0.2", 00:22:37.761 "trsvcid": "4420", 00:22:37.761 "trtype": "TCP" 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_nvme_set_hotplug", 00:22:37.761 "params": { 00:22:37.761 "enable": false, 00:22:37.761 "period_us": 100000 00:22:37.761 } 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "method": "bdev_wait_for_examine" 00:22:37.761 } 00:22:37.761 ] 00:22:37.761 }, 00:22:37.761 { 00:22:37.761 "subsystem": "nbd", 00:22:37.761 "config": [] 00:22:37.761 } 00:22:37.761 ] 00:22:37.761 }' 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 80181 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80181 ']' 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80181 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80181 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:37.761 killing process with pid 80181 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80181' 00:22:37.761 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.761 00:22:37.761 Latency(us) 00:22:37.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.761 =================================================================================================================== 00:22:37.761 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.761 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80181 00:22:37.761 [2024-05-15 02:23:25.629445] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.762 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80181 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 80093 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80093 ']' 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80093 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = 
Linux ']' 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80093 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:38.021 killing process with pid 80093 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80093' 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80093 00:22:38.021 [2024-05-15 02:23:25.842818] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:38.021 [2024-05-15 02:23:25.842859] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:38.021 02:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80093 00:22:38.021 02:23:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:38.021 02:23:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.021 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:38.021 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.021 02:23:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:38.021 "subsystems": [ 00:22:38.021 { 00:22:38.021 "subsystem": "keyring", 00:22:38.021 "config": [] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "iobuf", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "iobuf_set_options", 00:22:38.021 "params": { 00:22:38.021 "large_bufsize": 135168, 00:22:38.021 "large_pool_count": 1024, 00:22:38.021 "small_bufsize": 8192, 00:22:38.021 "small_pool_count": 8192 00:22:38.021 } 00:22:38.021 } 00:22:38.021 ] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "sock", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "sock_impl_set_options", 00:22:38.021 "params": { 00:22:38.021 "enable_ktls": false, 00:22:38.021 "enable_placement_id": 0, 00:22:38.021 "enable_quickack": false, 00:22:38.021 "enable_recv_pipe": true, 00:22:38.021 "enable_zerocopy_send_client": false, 00:22:38.021 "enable_zerocopy_send_server": true, 00:22:38.021 "impl_name": "posix", 00:22:38.021 "recv_buf_size": 2097152, 00:22:38.021 "send_buf_size": 2097152, 00:22:38.021 "tls_version": 0, 00:22:38.021 "zerocopy_threshold": 0 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "sock_impl_set_options", 00:22:38.021 "params": { 00:22:38.021 "enable_ktls": false, 00:22:38.021 "enable_placement_id": 0, 00:22:38.021 "enable_quickack": false, 00:22:38.021 "enable_recv_pipe": true, 00:22:38.021 "enable_zerocopy_send_client": false, 00:22:38.021 "enable_zerocopy_send_server": true, 00:22:38.021 "impl_name": "ssl", 00:22:38.021 "recv_buf_size": 4096, 00:22:38.021 "send_buf_size": 4096, 00:22:38.021 "tls_version": 0, 00:22:38.021 "zerocopy_threshold": 0 00:22:38.021 } 00:22:38.021 } 00:22:38.021 ] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "vmd", 00:22:38.021 "config": [] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "accel", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "accel_set_options", 00:22:38.021 "params": { 00:22:38.021 "buf_count": 2048, 00:22:38.021 "large_cache_size": 16, 00:22:38.021 "sequence_count": 
2048, 00:22:38.021 "small_cache_size": 128, 00:22:38.021 "task_count": 2048 00:22:38.021 } 00:22:38.021 } 00:22:38.021 ] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "bdev", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "bdev_set_options", 00:22:38.021 "params": { 00:22:38.021 "bdev_auto_examine": true, 00:22:38.021 "bdev_io_cache_size": 256, 00:22:38.021 "bdev_io_pool_size": 65535, 00:22:38.021 "iobuf_large_cache_size": 16, 00:22:38.021 "iobuf_small_cache_size": 128 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_raid_set_options", 00:22:38.021 "params": { 00:22:38.021 "process_window_size_kb": 1024 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_iscsi_set_options", 00:22:38.021 "params": { 00:22:38.021 "timeout_sec": 30 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_nvme_set_options", 00:22:38.021 "params": { 00:22:38.021 "action_on_timeout": "none", 00:22:38.021 "allow_accel_sequence": false, 00:22:38.021 "arbitration_burst": 0, 00:22:38.021 "bdev_retry_count": 3, 00:22:38.021 "ctrlr_loss_timeout_sec": 0, 00:22:38.021 "delay_cmd_submit": true, 00:22:38.021 "dhchap_dhgroups": [ 00:22:38.021 "null", 00:22:38.021 "ffdhe2048", 00:22:38.021 "ffdhe3072", 00:22:38.021 "ffdhe4096", 00:22:38.021 "ffdhe6144", 00:22:38.021 "ffdhe8192" 00:22:38.021 ], 00:22:38.021 "dhchap_digests": [ 00:22:38.021 "sha256", 00:22:38.021 "sha384", 00:22:38.021 "sha512" 00:22:38.021 ], 00:22:38.021 "disable_auto_failback": false, 00:22:38.021 "fast_io_fail_timeout_sec": 0, 00:22:38.021 "generate_uuids": false, 00:22:38.021 "high_priority_weight": 0, 00:22:38.021 "io_path_stat": false, 00:22:38.021 "io_queue_requests": 0, 00:22:38.021 "keep_alive_timeout_ms": 10000, 00:22:38.021 "low_priority_weight": 0, 00:22:38.021 "medium_priority_weight": 0, 00:22:38.021 "nvme_adminq_poll_period_us": 10000, 00:22:38.021 "nvme_error_stat": false, 00:22:38.021 "nvme_ioq_poll_period_us": 0, 00:22:38.021 "rdma_cm_event_timeout_ms": 0, 00:22:38.021 "rdma_max_cq_size": 0, 00:22:38.021 "rdma_srq_size": 0, 00:22:38.021 "reconnect_delay_sec": 0, 00:22:38.021 "timeout_admin_us": 0, 00:22:38.021 "timeout_us": 0, 00:22:38.021 "transport_ack_timeout": 0, 00:22:38.021 "transport_retry_count": 4, 00:22:38.021 "transport_tos": 0 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_nvme_set_hotplug", 00:22:38.021 "params": { 00:22:38.021 "enable": false, 00:22:38.021 "period_us": 100000 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_malloc_create", 00:22:38.021 "params": { 00:22:38.021 "block_size": 4096, 00:22:38.021 "name": "malloc0", 00:22:38.021 "num_blocks": 8192, 00:22:38.021 "optimal_io_boundary": 0, 00:22:38.021 "physical_block_size": 4096, 00:22:38.021 "uuid": "e4f02bf3-7476-499b-ae14-d62beac61820" 00:22:38.021 } 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "method": "bdev_wait_for_examine" 00:22:38.021 } 00:22:38.021 ] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "nbd", 00:22:38.021 "config": [] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "scheduler", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "framework_set_scheduler", 00:22:38.021 "params": { 00:22:38.021 "name": "static" 00:22:38.021 } 00:22:38.021 } 00:22:38.021 ] 00:22:38.021 }, 00:22:38.021 { 00:22:38.021 "subsystem": "nvmf", 00:22:38.021 "config": [ 00:22:38.021 { 00:22:38.021 "method": "nvmf_set_config", 00:22:38.021 "params": { 00:22:38.021 "admin_cmd_passthru": { 00:22:38.022 
"identify_ctrlr": false 00:22:38.022 }, 00:22:38.022 "discovery_filter": "match_any" 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_set_max_subsystems", 00:22:38.022 "params": { 00:22:38.022 "max_subsystems": 1024 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_set_crdt", 00:22:38.022 "params": { 00:22:38.022 "crdt1": 0, 00:22:38.022 "crdt2": 0, 00:22:38.022 "crdt3": 0 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_create_transport", 00:22:38.022 "params": { 00:22:38.022 "abort_timeout_sec": 1, 00:22:38.022 "ack_timeout": 0, 00:22:38.022 "buf_cache_size": 4294967295, 00:22:38.022 "c2h_success": false, 00:22:38.022 "data_wr_pool_size": 0, 00:22:38.022 "dif_insert_or_strip": false, 00:22:38.022 "in_capsule_data_size": 4096, 00:22:38.022 "io_unit_size": 131072, 00:22:38.022 "max_aq_depth": 128, 00:22:38.022 "max_io_qpairs_per_ctrlr": 127, 00:22:38.022 "max_io_size": 131072, 00:22:38.022 "max_queue_depth": 128, 00:22:38.022 "num_shared_buffers": 511, 00:22:38.022 "sock_priority": 0, 00:22:38.022 "trtype": "TCP", 00:22:38.022 "zcopy": false 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_create_subsystem", 00:22:38.022 "params": { 00:22:38.022 "allow_any_host": false, 00:22:38.022 "ana_reporting": false, 00:22:38.022 "max_cntlid": 65519, 00:22:38.022 "max_namespaces": 10, 00:22:38.022 "min_cntlid": 1, 00:22:38.022 "model_number": "SPDK bdev Controller", 00:22:38.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.022 "serial_number": "SPDK00000000000001" 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_subsystem_add_host", 00:22:38.022 "params": { 00:22:38.022 "host": "nqn.2016-06.io.spdk:host1", 00:22:38.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.022 "psk": "/tmp/tmp.y9j5LVBeaS" 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_subsystem_add_ns", 00:22:38.022 "params": { 00:22:38.022 "namespace": { 00:22:38.022 "bdev_name": "malloc0", 00:22:38.022 "nguid": "E4F02BF37476499BAE14D62BEAC61820", 00:22:38.022 "no_auto_visible": false, 00:22:38.022 "nsid": 1, 00:22:38.022 "uuid": "e4f02bf3-7476-499b-ae14-d62beac61820" 00:22:38.022 }, 00:22:38.022 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:38.022 } 00:22:38.022 }, 00:22:38.022 { 00:22:38.022 "method": "nvmf_subsystem_add_listener", 00:22:38.022 "params": { 00:22:38.022 "listen_address": { 00:22:38.022 "adrfam": "IPv4", 00:22:38.022 "traddr": "10.0.0.2", 00:22:38.022 "trsvcid": "4420", 00:22:38.022 "trtype": "TCP" 00:22:38.022 }, 00:22:38.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.022 "secure_channel": true 00:22:38.022 } 00:22:38.022 } 00:22:38.022 ] 00:22:38.022 } 00:22:38.022 ] 00:22:38.022 }' 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80233 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80233 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80233 ']' 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:38.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:38.281 02:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.281 [2024-05-15 02:23:26.088777] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:38.281 [2024-05-15 02:23:26.088869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.281 [2024-05-15 02:23:26.221975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.281 [2024-05-15 02:23:26.279316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.281 [2024-05-15 02:23:26.279372] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.281 [2024-05-15 02:23:26.279394] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.281 [2024-05-15 02:23:26.279404] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.281 [2024-05-15 02:23:26.279411] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.281 [2024-05-15 02:23:26.279490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.540 [2024-05-15 02:23:26.453818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.540 [2024-05-15 02:23:26.469749] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.540 [2024-05-15 02:23:26.485717] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:38.540 [2024-05-15 02:23:26.485791] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.540 [2024-05-15 02:23:26.485955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.108 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:39.108 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:39.108 02:23:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.108 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.108 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=80267 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 80267 /var/tmp/bdevperf.sock 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80267 ']' 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:39.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:39.368 "subsystems": [ 00:22:39.368 { 00:22:39.368 "subsystem": "keyring", 00:22:39.368 "config": [] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "iobuf", 00:22:39.368 "config": [ 00:22:39.368 { 00:22:39.368 "method": "iobuf_set_options", 00:22:39.368 "params": { 00:22:39.368 "large_bufsize": 135168, 00:22:39.368 "large_pool_count": 1024, 00:22:39.368 "small_bufsize": 8192, 00:22:39.368 "small_pool_count": 8192 00:22:39.368 } 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "sock", 00:22:39.368 "config": [ 00:22:39.368 { 00:22:39.368 "method": "sock_impl_set_options", 00:22:39.368 "params": { 00:22:39.368 "enable_ktls": false, 00:22:39.368 "enable_placement_id": 0, 00:22:39.368 "enable_quickack": false, 00:22:39.368 "enable_recv_pipe": true, 00:22:39.368 "enable_zerocopy_send_client": false, 00:22:39.368 "enable_zerocopy_send_server": true, 00:22:39.368 "impl_name": "posix", 00:22:39.368 "recv_buf_size": 2097152, 00:22:39.368 "send_buf_size": 2097152, 00:22:39.368 "tls_version": 0, 00:22:39.368 "zerocopy_threshold": 0 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "sock_impl_set_options", 00:22:39.368 "params": { 00:22:39.368 "enable_ktls": false, 00:22:39.368 "enable_placement_id": 0, 00:22:39.368 "enable_quickack": false, 00:22:39.368 "enable_recv_pipe": true, 00:22:39.368 "enable_zerocopy_send_client": false, 00:22:39.368 "enable_zerocopy_send_server": true, 00:22:39.368 "impl_name": "ssl", 00:22:39.368 "recv_buf_size": 4096, 00:22:39.368 "send_buf_size": 4096, 00:22:39.368 "tls_version": 0, 00:22:39.368 "zerocopy_threshold": 0 00:22:39.368 } 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "vmd", 00:22:39.368 "config": [] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "accel", 00:22:39.368 "config": [ 00:22:39.368 { 00:22:39.368 "method": "accel_set_options", 00:22:39.368 "params": { 00:22:39.368 "buf_count": 2048, 00:22:39.368 "large_cache_size": 16, 00:22:39.368 "sequence_count": 2048, 00:22:39.368 "small_cache_size": 128, 00:22:39.368 "task_count": 2048 00:22:39.368 } 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "bdev", 00:22:39.368 "config": [ 00:22:39.368 { 00:22:39.368 "method": "bdev_set_options", 00:22:39.368 "params": { 00:22:39.368 "bdev_auto_examine": true, 00:22:39.368 "bdev_io_cache_size": 256, 00:22:39.368 "bdev_io_pool_size": 65535, 00:22:39.368 "iobuf_large_cache_size": 16, 00:22:39.368 "iobuf_small_cache_size": 128 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_raid_set_options", 00:22:39.368 "params": { 00:22:39.368 "process_window_size_kb": 1024 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_iscsi_set_options", 00:22:39.368 "params": { 00:22:39.368 "timeout_sec": 30 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_nvme_set_options", 00:22:39.368 "params": { 00:22:39.368 "action_on_timeout": "none", 00:22:39.368 "allow_accel_sequence": false, 00:22:39.368 
"arbitration_burst": 0, 00:22:39.368 "bdev_retry_count": 3, 00:22:39.368 "ctrlr_loss_timeout_sec": 0, 00:22:39.368 "delay_cmd_submit": true, 00:22:39.368 "dhchap_dhgroups": [ 00:22:39.368 "null", 00:22:39.368 "ffdhe2048", 00:22:39.368 "ffdhe3072", 00:22:39.368 "ffdhe4096", 00:22:39.368 "ffdhe6144", 00:22:39.368 "ffdhe8192" 00:22:39.368 ], 00:22:39.368 "dhchap_digests": [ 00:22:39.368 "sha256", 00:22:39.368 "sha384", 00:22:39.368 "sha512" 00:22:39.368 ], 00:22:39.368 "disable_auto_failback": false, 00:22:39.368 "fast_io_fail_timeout_sec": 0, 00:22:39.368 "generate_uuids": false, 00:22:39.368 "high_priority_weight": 0, 00:22:39.368 "io_path_stat": false, 00:22:39.368 "io_queue_requests": 512, 00:22:39.368 "keep_alive_timeout_ms": 10000, 00:22:39.368 "low_priority_weight": 0, 00:22:39.368 "medium_priority_weight": 0, 00:22:39.368 "nvme_adminq_poll_period_us": 10000, 00:22:39.368 "nvme_error_stat": false, 00:22:39.368 "nvme_ioq_poll_period_us": 0, 00:22:39.368 "rdma_cm_event_timeout_ms": 0, 00:22:39.368 "rdma_max_cq_size": 0, 00:22:39.368 "rdma_srq_size": 0, 00:22:39.368 "reconnect_delay_sec": 0, 00:22:39.368 "timeout_admin_us": 0, 00:22:39.368 "timeout_us": 0, 00:22:39.368 "transport_ack_timeout": 0, 00:22:39.368 "transport_retry_count": 4, 00:22:39.368 "transport_tos": 0 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_nvme_attach_controller", 00:22:39.368 "params": { 00:22:39.368 "adrfam": "IPv4", 00:22:39.368 "ctrlr_loss_timeout_sec": 0, 00:22:39.368 "ddgst": false, 00:22:39.368 "fast_io_fail_timeout_sec": 0, 00:22:39.368 "hdgst": false, 00:22:39.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.368 "name": "TLSTEST", 00:22:39.368 "prchk_guard": false, 00:22:39.368 "prchk_reftag": false, 00:22:39.368 "psk": "/tmp/tmp.y9j5LVBeaS", 00:22:39.368 "reconnect_delay_sec": 0, 00:22:39.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.368 "traddr": "10.0.0.2", 00:22:39.368 "trsvcid": "4420", 00:22:39.368 "trtype": "TCP" 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_nvme_set_hotplug", 00:22:39.368 "params": { 00:22:39.368 "enable": false, 00:22:39.368 "period_us": 100000 00:22:39.368 } 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "method": "bdev_wait_for_examine" 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "subsystem": "nbd", 00:22:39.368 "config": [] 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }' 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:39.368 02:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.368 [2024-05-15 02:23:27.207345] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:39.368 [2024-05-15 02:23:27.207459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80267 ] 00:22:39.368 [2024-05-15 02:23:27.345162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.627 [2024-05-15 02:23:27.445088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.627 [2024-05-15 02:23:27.568874] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.627 [2024-05-15 02:23:27.569014] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:40.195 02:23:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:40.195 02:23:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:40.195 02:23:28 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:40.454 Running I/O for 10 seconds... 00:22:50.422 00:22:50.422 Latency(us) 00:22:50.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.422 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.422 Verification LBA range: start 0x0 length 0x2000 00:22:50.422 TLSTESTn1 : 10.02 3944.45 15.41 0.00 0.00 32386.98 6494.02 32410.53 00:22:50.422 =================================================================================================================== 00:22:50.422 Total : 3944.45 15.41 0.00 0.00 32386.98 6494.02 32410.53 00:22:50.422 0 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 80267 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80267 ']' 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80267 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80267 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.422 killing process with pid 80267 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80267' 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80267 00:22:50.422 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.422 00:22:50.422 Latency(us) 00:22:50.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.422 =================================================================================================================== 00:22:50.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.422 [2024-05-15 02:23:38.333684] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.422 02:23:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 80267 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 80233 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80233 ']' 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80233 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80233 00:22:50.680 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:50.681 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:50.681 killing process with pid 80233 00:22:50.681 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80233' 00:22:50.681 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80233 00:22:50.681 [2024-05-15 02:23:38.540977] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:50.681 [2024-05-15 02:23:38.541016] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.681 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80233 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80347 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80347 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80347 ']' 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.938 02:23:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.938 [2024-05-15 02:23:38.803741] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
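Teardown in this harness follows a consistent pattern, visible again here: check that the pid is alive with kill -0, read the command name with ps (SPDK app threads show up as reactor_<core>), log it, then kill and wait. A rough reconstruction of that pattern as a standalone helper (an illustration inferred from the trace, not the actual killprocess implementation in common/autotest_common.sh; its sudo special-casing is skipped):

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                  # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                 # reap it so the next stage starts clean
}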
00:22:50.938 [2024-05-15 02:23:38.803848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.938 [2024-05-15 02:23:38.938150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.195 [2024-05-15 02:23:38.997650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.195 [2024-05-15 02:23:38.997703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.195 [2024-05-15 02:23:38.997715] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.195 [2024-05-15 02:23:38.997723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.195 [2024-05-15 02:23:38.997730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.195 [2024-05-15 02:23:38.997762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.y9j5LVBeaS 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.y9j5LVBeaS 00:22:52.126 02:23:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.126 [2024-05-15 02:23:40.074448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.126 02:23:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.414 02:23:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.689 [2024-05-15 02:23:40.622535] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:52.689 [2024-05-15 02:23:40.622650] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.689 [2024-05-15 02:23:40.622826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.689 02:23:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.956 malloc0 00:22:52.956 02:23:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.215 02:23:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y9j5LVBeaS 00:22:53.473 [2024-05-15 02:23:41.381485] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=80437 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 80437 /var/tmp/bdevperf.sock 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80437 ']' 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.473 02:23:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.473 [2024-05-15 02:23:41.458915] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:53.473 [2024-05-15 02:23:41.459011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80437 ] 00:22:53.731 [2024-05-15 02:23:41.599145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.731 [2024-05-15 02:23:41.669405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.666 02:23:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.666 02:23:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:54.666 02:23:42 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y9j5LVBeaS 00:22:54.924 02:23:42 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:55.182 [2024-05-15 02:23:43.010724] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.182 nvme0n1 00:22:55.182 02:23:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.440 Running I/O for 1 seconds... 
00:22:56.372 00:22:56.372 Latency(us) 00:22:56.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.372 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:56.372 Verification LBA range: start 0x0 length 0x2000 00:22:56.372 nvme0n1 : 1.02 3739.20 14.61 0.00 0.00 33858.75 7745.16 44802.79 00:22:56.372 =================================================================================================================== 00:22:56.372 Total : 3739.20 14.61 0.00 0.00 33858.75 7745.16 44802.79 00:22:56.372 0 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 80437 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80437 ']' 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80437 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80437 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:56.372 killing process with pid 80437 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80437' 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80437 00:22:56.372 Received shutdown signal, test time was about 1.000000 seconds 00:22:56.372 00:22:56.372 Latency(us) 00:22:56.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.372 =================================================================================================================== 00:22:56.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.372 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80437 00:22:56.629 02:23:44 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 80347 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80347 ']' 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80347 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80347 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:56.630 killing process with pid 80347 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80347' 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80347 00:22:56.630 [2024-05-15 02:23:44.492895] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:56.630 [2024-05-15 02:23:44.492939] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:56.630 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
80347 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80490 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80490 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80490 ']' 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.888 02:23:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:56.888 [2024-05-15 02:23:44.748241] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:56.888 [2024-05-15 02:23:44.749198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.888 [2024-05-15 02:23:44.891339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.146 [2024-05-15 02:23:44.963670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.147 [2024-05-15 02:23:44.963723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.147 [2024-05-15 02:23:44.963736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.147 [2024-05-15 02:23:44.963746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.147 [2024-05-15 02:23:44.963755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
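For reference, the run that just finished (target pid 80347, bdevperf pid 80437), and the one the new target here will serve, uses the keyring flow on the initiator side: the PSK file is first registered under a key name, and the controller attach refers to that name instead of a raw path. Consolidated from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# register the PSK file with the bdevperf app's keyring as "key0"
$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.y9j5LVBeaS
# attach over TCP/TLS, referencing the key by name rather than by path
$rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The attach creates nvme0n1, which the subsequent perform_tests run exercises.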
00:22:57.147 [2024-05-15 02:23:44.963788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.081 [2024-05-15 02:23:45.772580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.081 malloc0 00:22:58.081 [2024-05-15 02:23:45.799321] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:58.081 [2024-05-15 02:23:45.799537] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.081 [2024-05-15 02:23:45.799715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=80534 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 80534 /var/tmp/bdevperf.sock 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80534 ']' 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:58.081 02:23:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.081 [2024-05-15 02:23:45.874185] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
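The bdevperf instance just launched with -z sits idle until it is configured over its RPC socket. Pieced together from the trace that follows, the driving sequence for this run is roughly the following (paths, NQNs and the key file are the ones used in this log; commands are relative to the spdk checkout):

    BPERF_SOCK=/var/tmp/bdevperf.sock
    scripts/rpc.py -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.y9j5LVBeaS
    scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests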
00:22:58.081 [2024-05-15 02:23:45.874271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80534 ] 00:22:58.081 [2024-05-15 02:23:46.012548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.340 [2024-05-15 02:23:46.105581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.276 02:23:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.276 02:23:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.276 02:23:46 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y9j5LVBeaS 00:22:59.276 02:23:47 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:59.534 [2024-05-15 02:23:47.416604] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.534 nvme0n1 00:22:59.534 02:23:47 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.792 Running I/O for 1 seconds... 00:23:00.728 00:23:00.728 Latency(us) 00:23:00.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.728 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:00.728 Verification LBA range: start 0x0 length 0x2000 00:23:00.728 nvme0n1 : 1.02 3794.64 14.82 0.00 0.00 33369.84 7477.06 28597.53 00:23:00.728 =================================================================================================================== 00:23:00.728 Total : 3794.64 14.82 0.00 0.00 33369.84 7477.06 28597.53 00:23:00.728 0 00:23:00.728 02:23:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:00.728 02:23:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.728 02:23:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.987 02:23:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.987 02:23:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:00.987 "subsystems": [ 00:23:00.987 { 00:23:00.987 "subsystem": "keyring", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "keyring_file_add_key", 00:23:00.987 "params": { 00:23:00.987 "name": "key0", 00:23:00.987 "path": "/tmp/tmp.y9j5LVBeaS" 00:23:00.987 } 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "iobuf", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "iobuf_set_options", 00:23:00.987 "params": { 00:23:00.987 "large_bufsize": 135168, 00:23:00.987 "large_pool_count": 1024, 00:23:00.987 "small_bufsize": 8192, 00:23:00.987 "small_pool_count": 8192 00:23:00.987 } 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "sock", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "sock_impl_set_options", 00:23:00.987 "params": { 00:23:00.987 "enable_ktls": false, 00:23:00.987 "enable_placement_id": 0, 00:23:00.987 "enable_quickack": false, 00:23:00.987 "enable_recv_pipe": true, 00:23:00.987 
"enable_zerocopy_send_client": false, 00:23:00.987 "enable_zerocopy_send_server": true, 00:23:00.987 "impl_name": "posix", 00:23:00.987 "recv_buf_size": 2097152, 00:23:00.987 "send_buf_size": 2097152, 00:23:00.987 "tls_version": 0, 00:23:00.987 "zerocopy_threshold": 0 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "sock_impl_set_options", 00:23:00.987 "params": { 00:23:00.987 "enable_ktls": false, 00:23:00.987 "enable_placement_id": 0, 00:23:00.987 "enable_quickack": false, 00:23:00.987 "enable_recv_pipe": true, 00:23:00.987 "enable_zerocopy_send_client": false, 00:23:00.987 "enable_zerocopy_send_server": true, 00:23:00.987 "impl_name": "ssl", 00:23:00.987 "recv_buf_size": 4096, 00:23:00.987 "send_buf_size": 4096, 00:23:00.987 "tls_version": 0, 00:23:00.987 "zerocopy_threshold": 0 00:23:00.987 } 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "vmd", 00:23:00.987 "config": [] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "accel", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "accel_set_options", 00:23:00.987 "params": { 00:23:00.987 "buf_count": 2048, 00:23:00.987 "large_cache_size": 16, 00:23:00.987 "sequence_count": 2048, 00:23:00.987 "small_cache_size": 128, 00:23:00.987 "task_count": 2048 00:23:00.987 } 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "bdev", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "bdev_set_options", 00:23:00.987 "params": { 00:23:00.987 "bdev_auto_examine": true, 00:23:00.987 "bdev_io_cache_size": 256, 00:23:00.987 "bdev_io_pool_size": 65535, 00:23:00.987 "iobuf_large_cache_size": 16, 00:23:00.987 "iobuf_small_cache_size": 128 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_raid_set_options", 00:23:00.987 "params": { 00:23:00.987 "process_window_size_kb": 1024 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_iscsi_set_options", 00:23:00.987 "params": { 00:23:00.987 "timeout_sec": 30 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_nvme_set_options", 00:23:00.987 "params": { 00:23:00.987 "action_on_timeout": "none", 00:23:00.987 "allow_accel_sequence": false, 00:23:00.987 "arbitration_burst": 0, 00:23:00.987 "bdev_retry_count": 3, 00:23:00.987 "ctrlr_loss_timeout_sec": 0, 00:23:00.987 "delay_cmd_submit": true, 00:23:00.987 "dhchap_dhgroups": [ 00:23:00.987 "null", 00:23:00.987 "ffdhe2048", 00:23:00.987 "ffdhe3072", 00:23:00.987 "ffdhe4096", 00:23:00.987 "ffdhe6144", 00:23:00.987 "ffdhe8192" 00:23:00.987 ], 00:23:00.987 "dhchap_digests": [ 00:23:00.987 "sha256", 00:23:00.987 "sha384", 00:23:00.987 "sha512" 00:23:00.987 ], 00:23:00.987 "disable_auto_failback": false, 00:23:00.987 "fast_io_fail_timeout_sec": 0, 00:23:00.987 "generate_uuids": false, 00:23:00.987 "high_priority_weight": 0, 00:23:00.987 "io_path_stat": false, 00:23:00.987 "io_queue_requests": 0, 00:23:00.987 "keep_alive_timeout_ms": 10000, 00:23:00.987 "low_priority_weight": 0, 00:23:00.987 "medium_priority_weight": 0, 00:23:00.987 "nvme_adminq_poll_period_us": 10000, 00:23:00.987 "nvme_error_stat": false, 00:23:00.987 "nvme_ioq_poll_period_us": 0, 00:23:00.987 "rdma_cm_event_timeout_ms": 0, 00:23:00.987 "rdma_max_cq_size": 0, 00:23:00.987 "rdma_srq_size": 0, 00:23:00.987 "reconnect_delay_sec": 0, 00:23:00.987 "timeout_admin_us": 0, 00:23:00.987 "timeout_us": 0, 00:23:00.987 "transport_ack_timeout": 0, 00:23:00.987 "transport_retry_count": 4, 00:23:00.987 "transport_tos": 0 
00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_nvme_set_hotplug", 00:23:00.987 "params": { 00:23:00.987 "enable": false, 00:23:00.987 "period_us": 100000 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_malloc_create", 00:23:00.987 "params": { 00:23:00.987 "block_size": 4096, 00:23:00.987 "name": "malloc0", 00:23:00.987 "num_blocks": 8192, 00:23:00.987 "optimal_io_boundary": 0, 00:23:00.987 "physical_block_size": 4096, 00:23:00.987 "uuid": "9962a88e-4bed-4725-861b-dc758a54f142" 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "method": "bdev_wait_for_examine" 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "nbd", 00:23:00.987 "config": [] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "scheduler", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "framework_set_scheduler", 00:23:00.987 "params": { 00:23:00.987 "name": "static" 00:23:00.987 } 00:23:00.987 } 00:23:00.987 ] 00:23:00.987 }, 00:23:00.987 { 00:23:00.987 "subsystem": "nvmf", 00:23:00.987 "config": [ 00:23:00.987 { 00:23:00.987 "method": "nvmf_set_config", 00:23:00.987 "params": { 00:23:00.987 "admin_cmd_passthru": { 00:23:00.987 "identify_ctrlr": false 00:23:00.987 }, 00:23:00.987 "discovery_filter": "match_any" 00:23:00.987 } 00:23:00.987 }, 00:23:00.987 { 00:23:00.988 "method": "nvmf_set_max_subsystems", 00:23:00.988 "params": { 00:23:00.988 "max_subsystems": 1024 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 "method": "nvmf_set_crdt", 00:23:00.988 "params": { 00:23:00.988 "crdt1": 0, 00:23:00.988 "crdt2": 0, 00:23:00.988 "crdt3": 0 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 "method": "nvmf_create_transport", 00:23:00.988 "params": { 00:23:00.988 "abort_timeout_sec": 1, 00:23:00.988 "ack_timeout": 0, 00:23:00.988 "buf_cache_size": 4294967295, 00:23:00.988 "c2h_success": false, 00:23:00.988 "data_wr_pool_size": 0, 00:23:00.988 "dif_insert_or_strip": false, 00:23:00.988 "in_capsule_data_size": 4096, 00:23:00.988 "io_unit_size": 131072, 00:23:00.988 "max_aq_depth": 128, 00:23:00.988 "max_io_qpairs_per_ctrlr": 127, 00:23:00.988 "max_io_size": 131072, 00:23:00.988 "max_queue_depth": 128, 00:23:00.988 "num_shared_buffers": 511, 00:23:00.988 "sock_priority": 0, 00:23:00.988 "trtype": "TCP", 00:23:00.988 "zcopy": false 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 "method": "nvmf_create_subsystem", 00:23:00.988 "params": { 00:23:00.988 "allow_any_host": false, 00:23:00.988 "ana_reporting": false, 00:23:00.988 "max_cntlid": 65519, 00:23:00.988 "max_namespaces": 32, 00:23:00.988 "min_cntlid": 1, 00:23:00.988 "model_number": "SPDK bdev Controller", 00:23:00.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.988 "serial_number": "00000000000000000000" 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 "method": "nvmf_subsystem_add_host", 00:23:00.988 "params": { 00:23:00.988 "host": "nqn.2016-06.io.spdk:host1", 00:23:00.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.988 "psk": "key0" 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 "method": "nvmf_subsystem_add_ns", 00:23:00.988 "params": { 00:23:00.988 "namespace": { 00:23:00.988 "bdev_name": "malloc0", 00:23:00.988 "nguid": "9962A88E4BED4725861BDC758A54F142", 00:23:00.988 "no_auto_visible": false, 00:23:00.988 "nsid": 1, 00:23:00.988 "uuid": "9962a88e-4bed-4725-861b-dc758a54f142" 00:23:00.988 }, 00:23:00.988 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:00.988 } 00:23:00.988 }, 00:23:00.988 { 00:23:00.988 
"method": "nvmf_subsystem_add_listener", 00:23:00.988 "params": { 00:23:00.988 "listen_address": { 00:23:00.988 "adrfam": "IPv4", 00:23:00.988 "traddr": "10.0.0.2", 00:23:00.988 "trsvcid": "4420", 00:23:00.988 "trtype": "TCP" 00:23:00.988 }, 00:23:00.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.988 "secure_channel": true 00:23:00.988 } 00:23:00.988 } 00:23:00.988 ] 00:23:00.988 } 00:23:00.988 ] 00:23:00.988 }' 00:23:00.988 02:23:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:01.247 "subsystems": [ 00:23:01.247 { 00:23:01.247 "subsystem": "keyring", 00:23:01.247 "config": [ 00:23:01.247 { 00:23:01.247 "method": "keyring_file_add_key", 00:23:01.247 "params": { 00:23:01.247 "name": "key0", 00:23:01.247 "path": "/tmp/tmp.y9j5LVBeaS" 00:23:01.247 } 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "iobuf", 00:23:01.247 "config": [ 00:23:01.247 { 00:23:01.247 "method": "iobuf_set_options", 00:23:01.247 "params": { 00:23:01.247 "large_bufsize": 135168, 00:23:01.247 "large_pool_count": 1024, 00:23:01.247 "small_bufsize": 8192, 00:23:01.247 "small_pool_count": 8192 00:23:01.247 } 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "sock", 00:23:01.247 "config": [ 00:23:01.247 { 00:23:01.247 "method": "sock_impl_set_options", 00:23:01.247 "params": { 00:23:01.247 "enable_ktls": false, 00:23:01.247 "enable_placement_id": 0, 00:23:01.247 "enable_quickack": false, 00:23:01.247 "enable_recv_pipe": true, 00:23:01.247 "enable_zerocopy_send_client": false, 00:23:01.247 "enable_zerocopy_send_server": true, 00:23:01.247 "impl_name": "posix", 00:23:01.247 "recv_buf_size": 2097152, 00:23:01.247 "send_buf_size": 2097152, 00:23:01.247 "tls_version": 0, 00:23:01.247 "zerocopy_threshold": 0 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "sock_impl_set_options", 00:23:01.247 "params": { 00:23:01.247 "enable_ktls": false, 00:23:01.247 "enable_placement_id": 0, 00:23:01.247 "enable_quickack": false, 00:23:01.247 "enable_recv_pipe": true, 00:23:01.247 "enable_zerocopy_send_client": false, 00:23:01.247 "enable_zerocopy_send_server": true, 00:23:01.247 "impl_name": "ssl", 00:23:01.247 "recv_buf_size": 4096, 00:23:01.247 "send_buf_size": 4096, 00:23:01.247 "tls_version": 0, 00:23:01.247 "zerocopy_threshold": 0 00:23:01.247 } 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "vmd", 00:23:01.247 "config": [] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "accel", 00:23:01.247 "config": [ 00:23:01.247 { 00:23:01.247 "method": "accel_set_options", 00:23:01.247 "params": { 00:23:01.247 "buf_count": 2048, 00:23:01.247 "large_cache_size": 16, 00:23:01.247 "sequence_count": 2048, 00:23:01.247 "small_cache_size": 128, 00:23:01.247 "task_count": 2048 00:23:01.247 } 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "bdev", 00:23:01.247 "config": [ 00:23:01.247 { 00:23:01.247 "method": "bdev_set_options", 00:23:01.247 "params": { 00:23:01.247 "bdev_auto_examine": true, 00:23:01.247 "bdev_io_cache_size": 256, 00:23:01.247 "bdev_io_pool_size": 65535, 00:23:01.247 "iobuf_large_cache_size": 16, 00:23:01.247 "iobuf_small_cache_size": 128 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_raid_set_options", 00:23:01.247 "params": { 00:23:01.247 "process_window_size_kb": 
1024 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_iscsi_set_options", 00:23:01.247 "params": { 00:23:01.247 "timeout_sec": 30 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_nvme_set_options", 00:23:01.247 "params": { 00:23:01.247 "action_on_timeout": "none", 00:23:01.247 "allow_accel_sequence": false, 00:23:01.247 "arbitration_burst": 0, 00:23:01.247 "bdev_retry_count": 3, 00:23:01.247 "ctrlr_loss_timeout_sec": 0, 00:23:01.247 "delay_cmd_submit": true, 00:23:01.247 "dhchap_dhgroups": [ 00:23:01.247 "null", 00:23:01.247 "ffdhe2048", 00:23:01.247 "ffdhe3072", 00:23:01.247 "ffdhe4096", 00:23:01.247 "ffdhe6144", 00:23:01.247 "ffdhe8192" 00:23:01.247 ], 00:23:01.247 "dhchap_digests": [ 00:23:01.247 "sha256", 00:23:01.247 "sha384", 00:23:01.247 "sha512" 00:23:01.247 ], 00:23:01.247 "disable_auto_failback": false, 00:23:01.247 "fast_io_fail_timeout_sec": 0, 00:23:01.247 "generate_uuids": false, 00:23:01.247 "high_priority_weight": 0, 00:23:01.247 "io_path_stat": false, 00:23:01.247 "io_queue_requests": 512, 00:23:01.247 "keep_alive_timeout_ms": 10000, 00:23:01.247 "low_priority_weight": 0, 00:23:01.247 "medium_priority_weight": 0, 00:23:01.247 "nvme_adminq_poll_period_us": 10000, 00:23:01.247 "nvme_error_stat": false, 00:23:01.247 "nvme_ioq_poll_period_us": 0, 00:23:01.247 "rdma_cm_event_timeout_ms": 0, 00:23:01.247 "rdma_max_cq_size": 0, 00:23:01.247 "rdma_srq_size": 0, 00:23:01.247 "reconnect_delay_sec": 0, 00:23:01.247 "timeout_admin_us": 0, 00:23:01.247 "timeout_us": 0, 00:23:01.247 "transport_ack_timeout": 0, 00:23:01.247 "transport_retry_count": 4, 00:23:01.247 "transport_tos": 0 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_nvme_attach_controller", 00:23:01.247 "params": { 00:23:01.247 "adrfam": "IPv4", 00:23:01.247 "ctrlr_loss_timeout_sec": 0, 00:23:01.247 "ddgst": false, 00:23:01.247 "fast_io_fail_timeout_sec": 0, 00:23:01.247 "hdgst": false, 00:23:01.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.247 "name": "nvme0", 00:23:01.247 "prchk_guard": false, 00:23:01.247 "prchk_reftag": false, 00:23:01.247 "psk": "key0", 00:23:01.247 "reconnect_delay_sec": 0, 00:23:01.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.247 "traddr": "10.0.0.2", 00:23:01.247 "trsvcid": "4420", 00:23:01.247 "trtype": "TCP" 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_nvme_set_hotplug", 00:23:01.247 "params": { 00:23:01.247 "enable": false, 00:23:01.247 "period_us": 100000 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_enable_histogram", 00:23:01.247 "params": { 00:23:01.247 "enable": true, 00:23:01.247 "name": "nvme0n1" 00:23:01.247 } 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "method": "bdev_wait_for_examine" 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }, 00:23:01.247 { 00:23:01.247 "subsystem": "nbd", 00:23:01.247 "config": [] 00:23:01.247 } 00:23:01.247 ] 00:23:01.247 }' 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 80534 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80534 ']' 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80534 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80534 00:23:01.247 02:23:49 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:01.247 killing process with pid 80534 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:01.247 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80534' 00:23:01.247 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.247 00:23:01.247 Latency(us) 00:23:01.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.247 =================================================================================================================== 00:23:01.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.248 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80534 00:23:01.248 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80534 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 80490 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80490 ']' 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80490 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80490 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:01.506 killing process with pid 80490 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80490' 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80490 00:23:01.506 [2024-05-15 02:23:49.422296] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:01.506 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80490 00:23:01.765 02:23:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:01.766 02:23:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.766 02:23:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:01.766 "subsystems": [ 00:23:01.766 { 00:23:01.766 "subsystem": "keyring", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "keyring_file_add_key", 00:23:01.766 "params": { 00:23:01.766 "name": "key0", 00:23:01.766 "path": "/tmp/tmp.y9j5LVBeaS" 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "iobuf", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "iobuf_set_options", 00:23:01.766 "params": { 00:23:01.766 "large_bufsize": 135168, 00:23:01.766 "large_pool_count": 1024, 00:23:01.766 "small_bufsize": 8192, 00:23:01.766 "small_pool_count": 8192 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "sock", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "sock_impl_set_options", 00:23:01.766 "params": { 00:23:01.766 "enable_ktls": false, 00:23:01.766 "enable_placement_id": 0, 00:23:01.766 "enable_quickack": false, 00:23:01.766 "enable_recv_pipe": true, 00:23:01.766 
"enable_zerocopy_send_client": false, 00:23:01.766 "enable_zerocopy_send_server": true, 00:23:01.766 "impl_name": "posix", 00:23:01.766 "recv_buf_size": 2097152, 00:23:01.766 "send_buf_size": 2097152, 00:23:01.766 "tls_version": 0, 00:23:01.766 "zerocopy_threshold": 0 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "sock_impl_set_options", 00:23:01.766 "params": { 00:23:01.766 "enable_ktls": false, 00:23:01.766 "enable_placement_id": 0, 00:23:01.766 "enable_quickack": false, 00:23:01.766 "enable_recv_pipe": true, 00:23:01.766 "enable_zerocopy_send_client": false, 00:23:01.766 "enable_zerocopy_send_server": true, 00:23:01.766 "impl_name": "ssl", 00:23:01.766 "recv_buf_size": 4096, 00:23:01.766 "send_buf_size": 4096, 00:23:01.766 "tls_version": 0, 00:23:01.766 "zerocopy_threshold": 0 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "vmd", 00:23:01.766 "config": [] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "accel", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "accel_set_options", 00:23:01.766 "params": { 00:23:01.766 "buf_count": 2048, 00:23:01.766 "large_cache_size": 16, 00:23:01.766 "sequence_count": 2048, 00:23:01.766 "small_cache_size": 128, 00:23:01.766 "task_count": 2048 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "bdev", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "bdev_set_options", 00:23:01.766 "params": { 00:23:01.766 "bdev_auto_examine": true, 00:23:01.766 "bdev_io_cache_size": 256, 00:23:01.766 "bdev_io_pool_size": 65535, 00:23:01.766 "iobuf_large_cache_size": 16, 00:23:01.766 "iobuf_small_cache_size": 128 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_raid_set_options", 00:23:01.766 "params": { 00:23:01.766 "process_window_size_kb": 1024 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_iscsi_set_options", 00:23:01.766 "params": { 00:23:01.766 "timeout_sec": 30 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_nvme_set_options", 00:23:01.766 "params": { 00:23:01.766 "action_on_timeout": "none", 00:23:01.766 "allow_accel_sequence": false, 00:23:01.766 "arbitration_burst": 0, 00:23:01.766 "bdev_retry_count": 3, 00:23:01.766 "ctrlr_loss_timeout_sec": 0, 00:23:01.766 "delay_cmd_submit": true, 00:23:01.766 "dhchap_dhgroups": [ 00:23:01.766 "null", 00:23:01.766 "ffdhe2048", 00:23:01.766 "ffdhe3072", 00:23:01.766 "ffdhe4096", 00:23:01.766 "ffdhe6144", 00:23:01.766 "ffdhe8192" 00:23:01.766 ], 00:23:01.766 "dhchap_digests": [ 00:23:01.766 "sha256", 00:23:01.766 "sha384", 00:23:01.766 "sha512" 00:23:01.766 ], 00:23:01.766 "disable_auto_failback": false, 00:23:01.766 "fast_io_fail_timeout_sec": 0, 00:23:01.766 "generate_uuids": false, 00:23:01.766 "high_priority_weight": 0, 00:23:01.766 "io_path_stat": false, 00:23:01.766 "io_queue_requests": 0, 00:23:01.766 "keep_alive_timeout_ms": 10000, 00:23:01.766 "low_priority_weight": 0, 00:23:01.766 "medium_priority_weight": 0, 00:23:01.766 "nvme_adminq_poll_period_us": 10000, 00:23:01.766 "nvme_error_stat": false, 00:23:01.766 "nvme_ioq_poll_period_us": 0, 00:23:01.766 "rdma_cm_event_timeout_ms": 0, 00:23:01.766 "rdma_max_cq_size": 0, 00:23:01.766 "rdma_srq_size": 0, 00:23:01.766 "reconnect_delay_sec": 0, 00:23:01.766 "timeout_admin_us": 0, 00:23:01.766 "timeout_us": 0, 00:23:01.766 "transport_ack_timeout": 0, 00:23:01.766 "transport_retry_count": 4, 00:23:01.766 "transport_tos": 0 
00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_nvme_set_hotplug", 00:23:01.766 "params": { 00:23:01.766 "enable": false, 00:23:01.766 "period_us": 100000 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_malloc_create", 00:23:01.766 "params": { 00:23:01.766 "block_size": 4096, 00:23:01.766 "name": "malloc0", 00:23:01.766 "num_blocks": 8192, 00:23:01.766 "optimal_io_boundary": 0, 00:23:01.766 "physical_block_size": 4096, 00:23:01.766 "uuid": "9962a88e-4bed-4725-861b-dc758a54f142" 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "bdev_wait_for_examine" 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "nbd", 00:23:01.766 "config": [] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "scheduler", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "framework_set_scheduler", 00:23:01.766 "params": { 00:23:01.766 "name": "static" 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "subsystem": "nvmf", 00:23:01.766 "config": [ 00:23:01.766 { 00:23:01.766 "method": "nvmf_set_config", 00:23:01.766 "params": { 00:23:01.766 "admin_cmd_passthru": { 00:23:01.766 "identify_ctrlr": false 00:23:01.766 }, 00:23:01.766 "discovery_filter": "match_any" 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_set_max_subsystems", 00:23:01.766 "params": { 00:23:01.766 "max_subsystems": 1024 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_set_crdt", 00:23:01.766 "params": { 00:23:01.766 "crdt1": 0, 00:23:01.766 "crdt2": 0, 00:23:01.766 "crdt3": 0 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_create_transport", 00:23:01.766 "params": { 00:23:01.766 "abort_timeout_sec": 1, 00:23:01.766 "ack_timeout": 0, 00:23:01.766 "buf_cache_size": 4294967295, 00:23:01.766 "c2h_success": false, 00:23:01.766 "data_wr_pool_size": 0, 00:23:01.766 "dif_insert_or_strip": false, 00:23:01.766 "in_capsule_data_size": 4096, 00:23:01.766 "io_unit_size": 131072, 00:23:01.766 "max_aq_depth": 128, 00:23:01.766 "max_io_qpairs_per_ctrlr": 127, 00:23:01.766 "max_io_size": 131072, 00:23:01.766 "max_queue_depth": 128, 00:23:01.766 "num_shared_buffers": 511, 00:23:01.766 "sock_priority": 0, 00:23:01.766 "trtype": "TCP", 00:23:01.766 "zcopy": false 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_create_subsystem", 00:23:01.766 "params": { 00:23:01.766 "allow_any_host": false, 00:23:01.766 "ana_reporting": false, 00:23:01.766 "max_cntlid": 65519, 00:23:01.766 "max_namespaces": 32, 00:23:01.766 "min_cntlid": 1, 00:23:01.766 "m 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:01.766 odel_number": "SPDK bdev Controller", 00:23:01.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.766 "serial_number": "00000000000000000000" 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_subsystem_add_host", 00:23:01.766 "params": { 00:23:01.766 "host": "nqn.2016-06.io.spdk:host1", 00:23:01.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.766 "psk": "key0" 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_subsystem_add_ns", 00:23:01.766 "params": { 00:23:01.766 "namespace": { 00:23:01.766 "bdev_name": "malloc0", 00:23:01.766 "nguid": "9962A88E4BED4725861BDC758A54F142", 00:23:01.766 "no_auto_visible": false, 00:23:01.766 "nsid": 1, 00:23:01.766 "uuid": "9962a88e-4bed-4725-861b-dc758a54f142" 00:23:01.766 }, 00:23:01.766 
"nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:01.766 } 00:23:01.766 }, 00:23:01.766 { 00:23:01.766 "method": "nvmf_subsystem_add_listener", 00:23:01.766 "params": { 00:23:01.766 "listen_address": { 00:23:01.766 "adrfam": "IPv4", 00:23:01.766 "traddr": "10.0.0.2", 00:23:01.766 "trsvcid": "4420", 00:23:01.766 "trtype": "TCP" 00:23:01.766 }, 00:23:01.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.766 "secure_channel": true 00:23:01.766 } 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 } 00:23:01.766 ] 00:23:01.766 }' 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80601 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80601 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80601 ']' 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.767 02:23:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 [2024-05-15 02:23:49.676760] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:01.767 [2024-05-15 02:23:49.676857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.025 [2024-05-15 02:23:49.818024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.025 [2024-05-15 02:23:49.887753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.025 [2024-05-15 02:23:49.887804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.025 [2024-05-15 02:23:49.887818] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.025 [2024-05-15 02:23:49.887829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.025 [2024-05-15 02:23:49.887839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.025 [2024-05-15 02:23:49.887929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.283 [2024-05-15 02:23:50.075874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.283 [2024-05-15 02:23:50.107756] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:02.283 [2024-05-15 02:23:50.107841] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.283 [2024-05-15 02:23:50.108023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=80639 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 80639 /var/tmp/bdevperf.sock 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 80639 ']' 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
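Both saved configurations in this section carry the TLS PSK by keyring name ("psk": "key0") rather than by file path, which is what the PSK-path deprecation warnings earlier in the log refer to. Read back as individual RPCs, the target-side wiring is roughly the sketch below; the --psk spelling for nvmf_subsystem_add_host is inferred from the saved config, so treat it as an assumption:

    # register the key file under a keyring name, then allow host1 to use it for this subsystem
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y9j5LVBeaS
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # the initiator side mirrors this by passing --psk key0 to bdev_nvme_attach_controller, as traced earlier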
00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.855 02:23:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:02.855 "subsystems": [ 00:23:02.855 { 00:23:02.855 "subsystem": "keyring", 00:23:02.855 "config": [ 00:23:02.855 { 00:23:02.855 "method": "keyring_file_add_key", 00:23:02.855 "params": { 00:23:02.855 "name": "key0", 00:23:02.855 "path": "/tmp/tmp.y9j5LVBeaS" 00:23:02.855 } 00:23:02.855 } 00:23:02.855 ] 00:23:02.855 }, 00:23:02.855 { 00:23:02.855 "subsystem": "iobuf", 00:23:02.855 "config": [ 00:23:02.855 { 00:23:02.855 "method": "iobuf_set_options", 00:23:02.855 "params": { 00:23:02.855 "large_bufsize": 135168, 00:23:02.855 "large_pool_count": 1024, 00:23:02.855 "small_bufsize": 8192, 00:23:02.855 "small_pool_count": 8192 00:23:02.855 } 00:23:02.855 } 00:23:02.855 ] 00:23:02.855 }, 00:23:02.855 { 00:23:02.855 "subsystem": "sock", 00:23:02.855 "config": [ 00:23:02.855 { 00:23:02.855 "method": "sock_impl_set_options", 00:23:02.855 "params": { 00:23:02.855 "enable_ktls": false, 00:23:02.855 "enable_placement_id": 0, 00:23:02.855 "enable_quickack": false, 00:23:02.855 "enable_recv_pipe": true, 00:23:02.855 "enable_zerocopy_send_client": false, 00:23:02.855 "enable_zerocopy_send_server": true, 00:23:02.855 "impl_name": "posix", 00:23:02.855 "recv_buf_size": 2097152, 00:23:02.855 "send_buf_size": 2097152, 00:23:02.855 "tls_version": 0, 00:23:02.855 "zerocopy_threshold": 0 00:23:02.855 } 00:23:02.855 }, 00:23:02.855 { 00:23:02.855 "method": "sock_impl_set_options", 00:23:02.855 "params": { 00:23:02.855 "enable_ktls": false, 00:23:02.855 "enable_placement_id": 0, 00:23:02.855 "enable_quickack": false, 00:23:02.855 "enable_recv_pipe": true, 00:23:02.855 "enable_zerocopy_send_client": false, 00:23:02.855 "enable_zerocopy_send_server": true, 00:23:02.855 "impl_name": "ssl", 00:23:02.855 "recv_buf_size": 4096, 00:23:02.856 "send_buf_size": 4096, 00:23:02.856 "tls_version": 0, 00:23:02.856 "zerocopy_threshold": 0 00:23:02.856 } 00:23:02.856 } 00:23:02.856 ] 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "subsystem": "vmd", 00:23:02.856 "config": [] 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "subsystem": "accel", 00:23:02.856 "config": [ 00:23:02.856 { 00:23:02.856 "method": "accel_set_options", 00:23:02.856 "params": { 00:23:02.856 "buf_count": 2048, 00:23:02.856 "large_cache_size": 16, 00:23:02.856 "sequence_count": 2048, 00:23:02.856 "small_cache_size": 128, 00:23:02.856 "task_count": 2048 00:23:02.856 } 00:23:02.856 } 00:23:02.856 ] 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "subsystem": "bdev", 00:23:02.856 "config": [ 00:23:02.856 { 00:23:02.856 "method": "bdev_set_options", 00:23:02.856 "params": { 00:23:02.856 "bdev_auto_examine": true, 00:23:02.856 "bdev_io_cache_size": 256, 00:23:02.856 "bdev_io_pool_size": 65535, 00:23:02.856 "iobuf_large_cache_size": 16, 00:23:02.856 "iobuf_small_cache_size": 128 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_raid_set_options", 00:23:02.856 "params": { 00:23:02.856 "process_window_size_kb": 1024 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_iscsi_set_options", 00:23:02.856 "params": { 00:23:02.856 "timeout_sec": 30 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_nvme_set_options", 00:23:02.856 "params": { 00:23:02.856 "action_on_timeout": "none", 00:23:02.856 "allow_accel_sequence": false, 00:23:02.856 "arbitration_burst": 
0, 00:23:02.856 "bdev_retry_count": 3, 00:23:02.856 "ctrlr_loss_timeout_sec": 0, 00:23:02.856 "delay_cmd_submit": true, 00:23:02.856 "dhchap_dhgroups": [ 00:23:02.856 "null", 00:23:02.856 "ffdhe2048", 00:23:02.856 "ffdhe3072", 00:23:02.856 "ffdhe4096", 00:23:02.856 "ffdhe6144", 00:23:02.856 "ffdhe8192" 00:23:02.856 ], 00:23:02.856 "dhchap_digests": [ 00:23:02.856 "sha256", 00:23:02.856 "sha384", 00:23:02.856 "sha512" 00:23:02.856 ], 00:23:02.856 "disable_auto_failback": false, 00:23:02.856 "fast_io_fail_timeout_sec": 0, 00:23:02.856 "generate_uuids": false, 00:23:02.856 "high_priority_weight": 0, 00:23:02.856 "io_path_stat": false, 00:23:02.856 "io_queue_requests": 512, 00:23:02.856 "keep_alive_timeout_ms": 10000, 00:23:02.856 "low_priority_weight": 0, 00:23:02.856 "medium_priority_weight": 0, 00:23:02.856 "nvme_adminq_poll_period_us": 10000, 00:23:02.856 "nvme_error_stat": false, 00:23:02.856 "nvme_ioq_poll_period_us": 0, 00:23:02.856 "rdma_cm_event_timeout_ms": 0, 00:23:02.856 "rdma_max_cq_size": 0, 00:23:02.856 "rdma_srq_size": 0, 00:23:02.856 "reconnect_delay_sec": 0, 00:23:02.856 "timeout_admin_us": 0, 00:23:02.856 "timeout_us": 0, 00:23:02.856 "transport_ack_timeout": 0, 00:23:02.856 "transport_retry_count": 4, 00:23:02.856 "transport_tos": 0 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_nvme_attach_controller", 00:23:02.856 "params": { 00:23:02.856 "adrfam": "IPv4", 00:23:02.856 "ctrlr_loss_timeout_sec": 0, 00:23:02.856 "ddgst": false, 00:23:02.856 "fast_io_fail_timeout_sec": 0, 00:23:02.856 "hdgst": false, 00:23:02.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.856 "name": "nvme0", 00:23:02.856 "prchk_guard": false, 00:23:02.856 "prchk_reftag": false, 00:23:02.856 "psk": "key0", 00:23:02.856 "reconnect_delay_sec": 0, 00:23:02.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.856 "traddr": "10.0.0.2", 00:23:02.856 "trsvcid": "4420", 00:23:02.856 "trtype": "TCP" 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_nvme_set_hotplug", 00:23:02.856 "params": { 00:23:02.856 "enable": false, 00:23:02.856 "period_us": 100000 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_enable_histogram", 00:23:02.856 "params": { 00:23:02.856 "enable": true, 00:23:02.856 "name": "nvme0n1" 00:23:02.856 } 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "method": "bdev_wait_for_examine" 00:23:02.856 } 00:23:02.856 ] 00:23:02.856 }, 00:23:02.856 { 00:23:02.856 "subsystem": "nbd", 00:23:02.856 "config": [] 00:23:02.856 } 00:23:02.856 ] 00:23:02.856 }' 00:23:02.856 [2024-05-15 02:23:50.782997] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
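Before any I/O is issued, the script first confirms that the TLS-attached controller actually exists on the bdevperf instance and only then kicks off the run over the same socket, as the trace below shows:

    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]     # tls.sh spells the expected name with escapes; a plain nvme0 compare works too
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests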
00:23:02.856 [2024-05-15 02:23:50.783094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80639 ] 00:23:03.114 [2024-05-15 02:23:50.928242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.114 [2024-05-15 02:23:50.998099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.372 [2024-05-15 02:23:51.130927] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.938 02:23:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.938 02:23:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:03.938 02:23:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:03.938 02:23:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.196 02:23:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.196 02:23:52 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.196 Running I/O for 1 seconds... 00:23:05.576 00:23:05.576 Latency(us) 00:23:05.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.576 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:05.576 Verification LBA range: start 0x0 length 0x2000 00:23:05.576 nvme0n1 : 1.02 3894.11 15.21 0.00 0.00 32550.63 6404.65 26452.71 00:23:05.576 =================================================================================================================== 00:23:05.576 Total : 3894.11 15.21 0.00 0.00 32550.63 6404.65 26452.71 00:23:05.576 0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:05.576 nvmf_trace.0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 80639 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80639 ']' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80639 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80639 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80639' 00:23:05.576 killing process with pid 80639 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80639 00:23:05.576 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.576 00:23:05.576 Latency(us) 00:23:05.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.576 =================================================================================================================== 00:23:05.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80639 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:05.576 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.577 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.577 rmmod nvme_tcp 00:23:05.835 rmmod nvme_fabrics 00:23:05.835 rmmod nvme_keyring 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 80601 ']' 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 80601 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 80601 ']' 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 80601 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80601 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:05.835 killing process with pid 80601 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80601' 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 80601 00:23:05.835 [2024-05-15 02:23:53.660285] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:05.835 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 80601 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3RLjMnm26r /tmp/tmp.lPyGNlY60H /tmp/tmp.y9j5LVBeaS 00:23:06.092 ************************************ 00:23:06.092 END TEST nvmf_tls 00:23:06.092 ************************************ 00:23:06.092 00:23:06.092 real 1m24.069s 00:23:06.092 user 2m13.970s 00:23:06.092 sys 0m26.944s 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.092 02:23:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.092 02:23:53 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:06.093 02:23:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:06.093 02:23:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.093 02:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.093 ************************************ 00:23:06.093 START TEST nvmf_fips 00:23:06.093 ************************************ 00:23:06.093 02:23:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:06.093 * Looking for test storage... 
00:23:06.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:06.093 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:06.094 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:06.352 Error setting digest 00:23:06.352 0032BEFA577F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:06.352 0032BEFA577F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:06.352 Cannot find device "nvmf_tgt_br" 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.352 Cannot find device "nvmf_tgt_br2" 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:06.352 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:06.353 Cannot find device "nvmf_tgt_br" 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:06.353 Cannot find device "nvmf_tgt_br2" 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:06.353 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:06.610 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:06.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:23:06.611 00:23:06.611 --- 10.0.0.2 ping statistics --- 00:23:06.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.611 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:06.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:06.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:23:06.611 00:23:06.611 --- 10.0.0.3 ping statistics --- 00:23:06.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.611 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:23:06.611 00:23:06.611 --- 10.0.0.1 ping statistics --- 00:23:06.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.611 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=80909 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 80909 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 80909 ']' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.611 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.869 [2024-05-15 02:23:54.638467] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
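For reference, the veth/bridge topology that nvmf_veth_init assembles in the trace above can be reproduced by hand with roughly the commands below. This is a condensed sketch based only on the traced steps, not the exact common.sh code: the second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is left out, and the stale-device cleanup and error handling are omitted.

    # host namespace: initiator veth pair, target veth pair, shared bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end keeps the IP, peer joins the bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # host reaches the target namespace over the bridge

With that in place, nvmf_tgt is launched inside nvmf_tgt_ns_spdk via 'ip netns exec', as the nvmfappstart trace that follows shows.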
00:23:06.869 [2024-05-15 02:23:54.638576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.869 [2024-05-15 02:23:54.777883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.869 [2024-05-15 02:23:54.847004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.869 [2024-05-15 02:23:54.847059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.869 [2024-05-15 02:23:54.847072] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.869 [2024-05-15 02:23:54.847082] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.869 [2024-05-15 02:23:54.847090] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.869 [2024-05-15 02:23:54.847125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:07.127 02:23:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.386 [2024-05-15 02:23:55.228726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.386 [2024-05-15 02:23:55.244663] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:07.386 [2024-05-15 02:23:55.244781] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.386 [2024-05-15 02:23:55.244972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.386 [2024-05-15 02:23:55.271919] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.386 malloc0 00:23:07.386 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=80943 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 80943 /var/tmp/bdevperf.sock 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 80943 ']' 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.386 02:23:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.386 [2024-05-15 02:23:55.376821] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:07.386 [2024-05-15 02:23:55.376913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80943 ] 00:23:07.644 [2024-05-15 02:23:55.517658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.644 [2024-05-15 02:23:55.586305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.578 02:23:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.578 02:23:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:08.578 02:23:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:08.578 [2024-05-15 02:23:56.561127] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.578 [2024-05-15 02:23:56.561231] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.837 TLSTESTn1 00:23:08.837 02:23:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.837 Running I/O for 10 seconds... 
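The TLS path being exercised here comes down to two pieces visible in the trace: the PSK written to key.txt with mode 0600, and a bdev_nvme_attach_controller call against the bdevperf RPC socket that passes that key with --psk. A minimal re-creation of those two steps, with paths shortened relative to the trace, would look like:

    # write the PSK string exactly as fips.sh does, with restrictive permissions
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    # attach a TLS-protected controller through bdevperf's RPC socket, mirroring the traced call
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

Note that the warnings in the trace flag this way of passing the PSK (spdk_nvme_ctrlr_opts.psk / PSK path) as deprecated, with removal planned for v24.09.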
00:23:18.807 00:23:18.807 Latency(us) 00:23:18.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.807 Verification LBA range: start 0x0 length 0x2000 00:23:18.807 TLSTESTn1 : 10.02 3864.68 15.10 0.00 0.00 33053.82 7417.48 38368.35 00:23:18.807 =================================================================================================================== 00:23:18.807 Total : 3864.68 15.10 0.00 0.00 33053.82 7417.48 38368.35 00:23:18.807 0 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:18.807 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:18.808 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:18.808 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:18.808 nvmf_trace.0 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 80943 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 80943 ']' 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 80943 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80943 00:23:19.067 killing process with pid 80943 00:23:19.067 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.067 00:23:19.067 Latency(us) 00:23:19.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.067 =================================================================================================================== 00:23:19.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80943' 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 80943 00:23:19.067 [2024-05-15 02:24:06.921411] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:19.067 02:24:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 80943 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
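A quick sanity check on the result table above: the MiB/s column is consistent with IOPS multiplied by the 4096-byte I/O size that bdevperf was started with (-o 4096). For the reported run:

    # 3864.68 IOPS x 4096 B per I/O, converted to MiB/s
    echo '3864.68 * 4096 / 1024 / 1024' | bc -l   # ~15.0965, matching the 15.10 MiB/s in the table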
00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.363 rmmod nvme_tcp 00:23:19.363 rmmod nvme_fabrics 00:23:19.363 rmmod nvme_keyring 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 80909 ']' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 80909 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 80909 ']' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 80909 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80909 00:23:19.363 killing process with pid 80909 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80909' 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 80909 00:23:19.363 [2024-05-15 02:24:07.241769] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:19.363 [2024-05-15 02:24:07.241810] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:19.363 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 80909 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:19.642 00:23:19.642 real 0m13.540s 00:23:19.642 user 0m18.765s 00:23:19.642 sys 0m5.445s 00:23:19.642 02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:19.642 
02:24:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.642 ************************************ 00:23:19.642 END TEST nvmf_fips 00:23:19.642 ************************************ 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:19.642 02:24:07 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:19.642 02:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.642 ************************************ 00:23:19.642 START TEST nvmf_multicontroller 00:23:19.642 ************************************ 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:19.642 * Looking for test storage... 00:23:19.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.642 
02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:19.642 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 
-- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:19.900 Cannot find device "nvmf_tgt_br" 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.900 Cannot find device "nvmf_tgt_br2" 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:19.900 Cannot find device "nvmf_tgt_br" 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:19.900 Cannot find device "nvmf_tgt_br2" 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr 
add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.900 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.158 02:24:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:20.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:23:20.158 00:23:20.158 --- 10.0.0.2 ping statistics --- 00:23:20.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.158 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:20.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:20.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:23:20.158 00:23:20.158 --- 10.0.0.3 ping statistics --- 00:23:20.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.158 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:20.158 00:23:20.158 --- 10.0.0.1 ping statistics --- 00:23:20.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.158 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=81228 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 81228 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 81228 ']' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:20.158 02:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.158 [2024-05-15 02:24:08.101106] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:20.158 [2024-05-15 02:24:08.101205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.416 [2024-05-15 02:24:08.243266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.416 [2024-05-15 02:24:08.307552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:20.416 [2024-05-15 02:24:08.307644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.416 [2024-05-15 02:24:08.307666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.416 [2024-05-15 02:24:08.307684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.416 [2024-05-15 02:24:08.307708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.416 [2024-05-15 02:24:08.307895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.416 [2024-05-15 02:24:08.308632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.416 [2024-05-15 02:24:08.308644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 [2024-05-15 02:24:09.140474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 Malloc0 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.352 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.352 [2024-05-15 02:24:09.194707] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:21.353 [2024-05-15 02:24:09.194995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 [2024-05-15 02:24:09.202910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 Malloc1 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.353 02:24:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=81276 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 81276 /var/tmp/bdevperf.sock 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 81276 ']' 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.353 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.611 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.611 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:21.611 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:21.611 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.611 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.870 NVMe0n1 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.870 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.870 1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 2024/05/15 02:24:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:21.871 request: 00:23:21.871 { 00:23:21.871 "method": "bdev_nvme_attach_controller", 00:23:21.871 "params": { 00:23:21.871 "name": "NVMe0", 00:23:21.871 "trtype": "tcp", 00:23:21.871 "traddr": "10.0.0.2", 00:23:21.871 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:21.871 "hostaddr": "10.0.0.2", 00:23:21.871 "hostsvcid": "60000", 00:23:21.871 "adrfam": "ipv4", 00:23:21.871 "trsvcid": "4420", 00:23:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:21.871 } 00:23:21.871 } 00:23:21.871 Got JSON-RPC error response 00:23:21.871 GoRPCClient: error on JSON-RPC call 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:21.871 
02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 2024/05/15 02:24:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:21.871 request: 00:23:21.871 { 00:23:21.871 "method": "bdev_nvme_attach_controller", 00:23:21.871 "params": { 00:23:21.871 "name": "NVMe0", 00:23:21.871 "trtype": "tcp", 00:23:21.871 "traddr": "10.0.0.2", 00:23:21.871 "hostaddr": "10.0.0.2", 00:23:21.871 "hostsvcid": "60000", 00:23:21.871 "adrfam": "ipv4", 00:23:21.871 "trsvcid": "4420", 00:23:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:21.871 } 00:23:21.871 } 00:23:21.871 Got JSON-RPC error response 00:23:21.871 GoRPCClient: error on JSON-RPC call 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 2024/05/15 02:24:09 error on 
JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:21.871 request: 00:23:21.871 { 00:23:21.871 "method": "bdev_nvme_attach_controller", 00:23:21.871 "params": { 00:23:21.871 "name": "NVMe0", 00:23:21.871 "trtype": "tcp", 00:23:21.871 "traddr": "10.0.0.2", 00:23:21.871 "hostaddr": "10.0.0.2", 00:23:21.871 "hostsvcid": "60000", 00:23:21.871 "adrfam": "ipv4", 00:23:21.871 "trsvcid": "4420", 00:23:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.871 "multipath": "disable" 00:23:21.871 } 00:23:21.871 } 00:23:21.871 Got JSON-RPC error response 00:23:21.871 GoRPCClient: error on JSON-RPC call 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 2024/05/15 02:24:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:21.871 request: 00:23:21.871 { 00:23:21.871 "method": "bdev_nvme_attach_controller", 00:23:21.871 "params": { 00:23:21.871 "name": "NVMe0", 00:23:21.871 "trtype": "tcp", 
00:23:21.871 "traddr": "10.0.0.2", 00:23:21.871 "hostaddr": "10.0.0.2", 00:23:21.871 "hostsvcid": "60000", 00:23:21.871 "adrfam": "ipv4", 00:23:21.871 "trsvcid": "4420", 00:23:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.871 "multipath": "failover" 00:23:21.871 } 00:23:21.871 } 00:23:21.871 Got JSON-RPC error response 00:23:21.871 GoRPCClient: error on JSON-RPC call 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.871 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.872 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.872 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.872 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:21.872 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.872 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.130 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:22.130 02:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.504 0 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:23.504 02:24:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 81276 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 81276 ']' 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 81276 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81276 00:23:23.504 killing process with pid 81276 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81276' 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 81276 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 81276 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:23:23.504 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:23:23.504 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:23.504 [2024-05-15 02:24:09.308579] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:23:23.504 [2024-05-15 02:24:09.308709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81276 ] 00:23:23.504 [2024-05-15 02:24:09.447195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.504 [2024-05-15 02:24:09.516969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.504 [2024-05-15 02:24:09.913409] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name c7fdaf95-c309-48b6-85f8-9a82136151af already exists 00:23:23.504 [2024-05-15 02:24:09.913496] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:c7fdaf95-c309-48b6-85f8-9a82136151af alias for bdev NVMe1n1 00:23:23.504 [2024-05-15 02:24:09.913520] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:23.504 Running I/O for 1 seconds... 00:23:23.504 00:23:23.504 Latency(us) 00:23:23.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.504 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:23.504 NVMe0n1 : 1.00 18612.58 72.71 0.00 0.00 6856.18 2100.13 12571.00 00:23:23.504 =================================================================================================================== 00:23:23.505 Total : 18612.58 72.71 0.00 0.00 6856.18 2100.13 12571.00 00:23:23.505 Received shutdown signal, test time was about 1.000000 seconds 00:23:23.505 00:23:23.505 Latency(us) 00:23:23.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.505 =================================================================================================================== 00:23:23.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.505 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.505 rmmod nvme_tcp 00:23:23.505 rmmod nvme_fabrics 00:23:23.505 rmmod nvme_keyring 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 81228 ']' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 81228 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 81228 ']' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # kill -0 81228 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81228 00:23:23.505 killing process with pid 81228 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81228' 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 81228 00:23:23.505 [2024-05-15 02:24:11.490502] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:23.505 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 81228 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:23.763 00:23:23.763 real 0m4.183s 00:23:23.763 user 0m12.512s 00:23:23.763 sys 0m0.962s 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:23.763 ************************************ 00:23:23.763 END TEST nvmf_multicontroller 00:23:23.763 ************************************ 00:23:23.763 02:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.020 02:24:11 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.020 02:24:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:24.020 02:24:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:24.020 02:24:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.020 ************************************ 00:23:24.020 START TEST nvmf_aer 00:23:24.020 ************************************ 00:23:24.020 02:24:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.020 * Looking for test storage... 
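
For readability, the multipath checks that nvmf_multicontroller exercised above against bdevperf reduce to the RPC sequence sketched below. This is a minimal sketch rather than the harness itself: it assumes a built SPDK tree, the target from this run listening on 10.0.0.2 ports 4420/4421, and bdevperf already started with -z -r /var/tmp/bdevperf.sock (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py).

    # First path: attach cnode1 over 10.0.0.2:4420; this creates bdev NVMe0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Each of these re-attach attempts is expected to fail with -114, as seen above:
    # same controller name but a different host NQN ...
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
    # ... a different subsystem under the same controller name ...
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
    # ... and the identical network path again with multipath set to disable (or failover).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000 -x disable

    # Adding the second listener port as an additional path under the same name succeeds.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
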
00:23:24.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:24.020 02:24:11 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:24.020 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.021 
02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:24.021 Cannot find device "nvmf_tgt_br" 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:24.021 Cannot find device "nvmf_tgt_br2" 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:24.021 Cannot find device "nvmf_tgt_br" 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:24.021 Cannot find device "nvmf_tgt_br2" 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:24.021 02:24:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:24.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:24.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:24.021 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
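
The veth plumbing traced here, together with the addressing, bridging, and firewall rules that continue just below, gives the initiator a host-side interface at 10.0.0.1 while the target's two interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed into one place, and using only commands that appear in this trace (the loop is just shorthand for the repeated per-device steps), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do   # host-side peers join one bridge
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
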
00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:24.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:23:24.279 00:23:24.279 --- 10.0.0.2 ping statistics --- 00:23:24.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.279 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:24.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:24.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:23:24.279 00:23:24.279 --- 10.0.0.3 ping statistics --- 00:23:24.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.279 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:24.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:24.279 00:23:24.279 --- 10.0.0.1 ping statistics --- 00:23:24.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.279 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.279 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=81486 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 81486 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 81486 ']' 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.280 02:24:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.538 [2024-05-15 02:24:12.295105] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:24.538 [2024-05-15 02:24:12.296146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.538 [2024-05-15 02:24:12.437098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.538 [2024-05-15 02:24:12.510339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.538 [2024-05-15 02:24:12.510709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.538 [2024-05-15 02:24:12.510754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.538 [2024-05-15 02:24:12.510773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.538 [2024-05-15 02:24:12.510790] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.538 [2024-05-15 02:24:12.510964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.538 [2024-05-15 02:24:12.511138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.538 [2024-05-15 02:24:12.511782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.538 [2024-05-15 02:24:12.512311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 [2024-05-15 02:24:13.321118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 Malloc0 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 [2024-05-15 02:24:13.392379] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:25.472 [2024-05-15 02:24:13.392636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.472 [ 00:23:25.472 { 00:23:25.472 "allow_any_host": true, 00:23:25.472 "hosts": [], 00:23:25.472 "listen_addresses": [], 00:23:25.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:25.472 "subtype": "Discovery" 00:23:25.472 }, 00:23:25.472 { 00:23:25.472 "allow_any_host": true, 00:23:25.472 "hosts": [], 00:23:25.472 "listen_addresses": [ 00:23:25.472 { 00:23:25.472 "adrfam": "IPv4", 00:23:25.472 "traddr": "10.0.0.2", 00:23:25.472 "trsvcid": "4420", 00:23:25.472 "trtype": "TCP" 00:23:25.472 } 00:23:25.472 ], 00:23:25.472 "max_cntlid": 65519, 00:23:25.472 "max_namespaces": 2, 00:23:25.472 "min_cntlid": 1, 00:23:25.472 "model_number": "SPDK bdev Controller", 00:23:25.472 "namespaces": [ 00:23:25.472 { 00:23:25.472 "bdev_name": "Malloc0", 00:23:25.472 "name": "Malloc0", 00:23:25.472 "nguid": "8C0D5479408248F9B17C74E9E013BE41", 00:23:25.472 "nsid": 1, 00:23:25.472 "uuid": "8c0d5479-4082-48f9-b17c-74e9e013be41" 00:23:25.472 } 00:23:25.472 ], 00:23:25.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.472 "serial_number": "SPDK00000000000001", 00:23:25.472 "subtype": "NVMe" 00:23:25.472 } 00:23:25.472 ] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=81534 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:23:25.472 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 Malloc1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 Asynchronous Event Request test 00:23:25.730 Attaching to 10.0.0.2 00:23:25.730 Attached to 10.0.0.2 00:23:25.730 Registering asynchronous event callbacks... 00:23:25.730 Starting namespace attribute notice tests for all controllers... 00:23:25.730 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:25.730 aer_cb - Changed Namespace 00:23:25.730 Cleaning up... 00:23:25.730 [ 00:23:25.730 { 00:23:25.730 "allow_any_host": true, 00:23:25.730 "hosts": [], 00:23:25.730 "listen_addresses": [], 00:23:25.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:25.730 "subtype": "Discovery" 00:23:25.730 }, 00:23:25.730 { 00:23:25.730 "allow_any_host": true, 00:23:25.730 "hosts": [], 00:23:25.730 "listen_addresses": [ 00:23:25.730 { 00:23:25.730 "adrfam": "IPv4", 00:23:25.730 "traddr": "10.0.0.2", 00:23:25.730 "trsvcid": "4420", 00:23:25.730 "trtype": "TCP" 00:23:25.730 } 00:23:25.730 ], 00:23:25.730 "max_cntlid": 65519, 00:23:25.730 "max_namespaces": 2, 00:23:25.730 "min_cntlid": 1, 00:23:25.730 "model_number": "SPDK bdev Controller", 00:23:25.730 "namespaces": [ 00:23:25.730 { 00:23:25.730 "bdev_name": "Malloc0", 00:23:25.730 "name": "Malloc0", 00:23:25.730 "nguid": "8C0D5479408248F9B17C74E9E013BE41", 00:23:25.730 "nsid": 1, 00:23:25.730 "uuid": "8c0d5479-4082-48f9-b17c-74e9e013be41" 00:23:25.730 }, 00:23:25.730 { 00:23:25.730 "bdev_name": "Malloc1", 00:23:25.730 "name": "Malloc1", 00:23:25.730 "nguid": "B958A15916F34FAF9473DDF2AE116403", 00:23:25.730 "nsid": 2, 00:23:25.730 "uuid": "b958a159-16f3-4faf-9473-ddf2ae116403" 00:23:25.730 } 00:23:25.730 ], 00:23:25.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.730 "serial_number": "SPDK00000000000001", 00:23:25.730 "subtype": "NVMe" 00:23:25.730 } 00:23:25.730 ] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 81534 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.730 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.987 rmmod nvme_tcp 00:23:25.987 rmmod nvme_fabrics 00:23:25.987 rmmod nvme_keyring 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 81486 ']' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 81486 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 81486 ']' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 81486 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81486 00:23:25.987 killing process with pid 81486 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81486' 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 81486 00:23:25.987 [2024-05-15 02:24:13.853778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:25.987 02:24:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 81486 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 
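
The AER portion of this run boils down to: start the aer helper against cnode1, wait for it to signal readiness through a touch file, then hot-add a second namespace so the controller emits the namespace-attribute-changed event (the "aer_cb - Changed Namespace" line above) before everything is torn down. A minimal stand-alone sketch of that flow, assuming the target and subsystem from this run are still up, paths are relative to the SPDK tree, and scripts/rpc.py talks to the default /var/tmp/spdk.sock socket, is:

    # Subscribe for asynchronous events on cnode1; the helper touches the file
    # once its AER callback is registered, and the harness polls for that file.
    ./test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Adding Malloc1 as namespace 2 triggers the Changed Namespace notification.
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

    # The helper exits once it has observed the expected namespace change.
    wait "$aerpid"
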
00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:26.245 ************************************ 00:23:26.245 END TEST nvmf_aer 00:23:26.245 ************************************ 00:23:26.245 00:23:26.245 real 0m2.293s 00:23:26.245 user 0m6.319s 00:23:26.245 sys 0m0.577s 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.245 02:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.245 02:24:14 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:26.245 02:24:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:26.245 02:24:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.245 02:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.245 ************************************ 00:23:26.245 START TEST nvmf_async_init 00:23:26.245 ************************************ 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:26.245 * Looking for test storage... 00:23:26.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.245 02:24:14 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.245 02:24:14 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fd1d27ad58794cdaa5a62f0cf9f0eb6f 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.245 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:26.246 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:26.246 Cannot find device "nvmf_tgt_br" 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.503 Cannot find device "nvmf_tgt_br2" 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:26.503 Cannot find device "nvmf_tgt_br" 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:26.503 Cannot find device "nvmf_tgt_br2" 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:26.503 
02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.503 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:26.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:23:26.763 00:23:26.763 --- 10.0.0.2 ping statistics --- 00:23:26.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.763 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:26.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:26.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:26.763 00:23:26.763 --- 10.0.0.3 ping statistics --- 00:23:26.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.763 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:26.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:26.763 00:23:26.763 --- 10.0.0.1 ping statistics --- 00:23:26.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.763 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=81697 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 81697 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 81697 ']' 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.763 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.764 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.764 02:24:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.764 [2024-05-15 02:24:14.633230] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:26.764 [2024-05-15 02:24:14.633585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.764 [2024-05-15 02:24:14.772157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.021 [2024-05-15 02:24:14.831825] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.021 [2024-05-15 02:24:14.831880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.021 [2024-05-15 02:24:14.831893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.021 [2024-05-15 02:24:14.831902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.021 [2024-05-15 02:24:14.831909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.021 [2024-05-15 02:24:14.831936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.627 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.627 [2024-05-15 02:24:15.640575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.885 null0 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.885 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd1d27ad58794cdaa5a62f0cf9f0eb6f 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:27.886 
02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.886 [2024-05-15 02:24:15.688558] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:27.886 [2024-05-15 02:24:15.688870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.886 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.144 nvme0n1 00:23:28.144 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.144 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:28.144 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.144 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.144 [ 00:23:28.144 { 00:23:28.144 "aliases": [ 00:23:28.144 "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f" 00:23:28.144 ], 00:23:28.144 "assigned_rate_limits": { 00:23:28.144 "r_mbytes_per_sec": 0, 00:23:28.144 "rw_ios_per_sec": 0, 00:23:28.145 "rw_mbytes_per_sec": 0, 00:23:28.145 "w_mbytes_per_sec": 0 00:23:28.145 }, 00:23:28.145 "block_size": 512, 00:23:28.145 "claimed": false, 00:23:28.145 "driver_specific": { 00:23:28.145 "mp_policy": "active_passive", 00:23:28.145 "nvme": [ 00:23:28.145 { 00:23:28.145 "ctrlr_data": { 00:23:28.145 "ana_reporting": false, 00:23:28.145 "cntlid": 1, 00:23:28.145 "firmware_revision": "24.05", 00:23:28.145 "model_number": "SPDK bdev Controller", 00:23:28.145 "multi_ctrlr": true, 00:23:28.145 "oacs": { 00:23:28.145 "firmware": 0, 00:23:28.145 "format": 0, 00:23:28.145 "ns_manage": 0, 00:23:28.145 "security": 0 00:23:28.145 }, 00:23:28.145 "serial_number": "00000000000000000000", 00:23:28.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.145 "vendor_id": "0x8086" 00:23:28.145 }, 00:23:28.145 "ns_data": { 00:23:28.145 "can_share": true, 00:23:28.145 "id": 1 00:23:28.145 }, 00:23:28.145 "trid": { 00:23:28.145 "adrfam": "IPv4", 00:23:28.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.145 "traddr": "10.0.0.2", 00:23:28.145 "trsvcid": "4420", 00:23:28.145 "trtype": "TCP" 00:23:28.145 }, 00:23:28.145 "vs": { 00:23:28.145 "nvme_version": "1.3" 00:23:28.145 } 00:23:28.145 } 00:23:28.145 ] 00:23:28.145 }, 00:23:28.145 "memory_domains": [ 00:23:28.145 { 00:23:28.145 "dma_device_id": "system", 00:23:28.145 "dma_device_type": 1 00:23:28.145 } 00:23:28.145 ], 00:23:28.145 "name": "nvme0n1", 00:23:28.145 "num_blocks": 2097152, 00:23:28.145 "product_name": "NVMe disk", 00:23:28.145 "supported_io_types": { 00:23:28.145 "abort": true, 00:23:28.145 "compare": true, 00:23:28.145 "compare_and_write": true, 00:23:28.145 "flush": true, 00:23:28.145 "nvme_admin": true, 00:23:28.145 "nvme_io": true, 00:23:28.145 "read": true, 00:23:28.145 "reset": true, 00:23:28.145 "unmap": false, 00:23:28.145 "write": true, 00:23:28.145 "write_zeroes": true 00:23:28.145 }, 
00:23:28.145 "uuid": "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f", 00:23:28.145 "zoned": false 00:23:28.145 } 00:23:28.145 ] 00:23:28.145 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.145 02:24:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:28.145 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.145 02:24:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.145 [2024-05-15 02:24:15.960671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.145 [2024-05-15 02:24:15.960981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67f70 (9): Bad file descriptor 00:23:28.145 [2024-05-15 02:24:16.103581] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.145 [ 00:23:28.145 { 00:23:28.145 "aliases": [ 00:23:28.145 "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f" 00:23:28.145 ], 00:23:28.145 "assigned_rate_limits": { 00:23:28.145 "r_mbytes_per_sec": 0, 00:23:28.145 "rw_ios_per_sec": 0, 00:23:28.145 "rw_mbytes_per_sec": 0, 00:23:28.145 "w_mbytes_per_sec": 0 00:23:28.145 }, 00:23:28.145 "block_size": 512, 00:23:28.145 "claimed": false, 00:23:28.145 "driver_specific": { 00:23:28.145 "mp_policy": "active_passive", 00:23:28.145 "nvme": [ 00:23:28.145 { 00:23:28.145 "ctrlr_data": { 00:23:28.145 "ana_reporting": false, 00:23:28.145 "cntlid": 2, 00:23:28.145 "firmware_revision": "24.05", 00:23:28.145 "model_number": "SPDK bdev Controller", 00:23:28.145 "multi_ctrlr": true, 00:23:28.145 "oacs": { 00:23:28.145 "firmware": 0, 00:23:28.145 "format": 0, 00:23:28.145 "ns_manage": 0, 00:23:28.145 "security": 0 00:23:28.145 }, 00:23:28.145 "serial_number": "00000000000000000000", 00:23:28.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.145 "vendor_id": "0x8086" 00:23:28.145 }, 00:23:28.145 "ns_data": { 00:23:28.145 "can_share": true, 00:23:28.145 "id": 1 00:23:28.145 }, 00:23:28.145 "trid": { 00:23:28.145 "adrfam": "IPv4", 00:23:28.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.145 "traddr": "10.0.0.2", 00:23:28.145 "trsvcid": "4420", 00:23:28.145 "trtype": "TCP" 00:23:28.145 }, 00:23:28.145 "vs": { 00:23:28.145 "nvme_version": "1.3" 00:23:28.145 } 00:23:28.145 } 00:23:28.145 ] 00:23:28.145 }, 00:23:28.145 "memory_domains": [ 00:23:28.145 { 00:23:28.145 "dma_device_id": "system", 00:23:28.145 "dma_device_type": 1 00:23:28.145 } 00:23:28.145 ], 00:23:28.145 "name": "nvme0n1", 00:23:28.145 "num_blocks": 2097152, 00:23:28.145 "product_name": "NVMe disk", 00:23:28.145 "supported_io_types": { 00:23:28.145 "abort": true, 00:23:28.145 "compare": true, 00:23:28.145 "compare_and_write": true, 00:23:28.145 "flush": true, 00:23:28.145 "nvme_admin": true, 00:23:28.145 "nvme_io": true, 00:23:28.145 "read": true, 00:23:28.145 "reset": true, 00:23:28.145 "unmap": false, 00:23:28.145 "write": true, 00:23:28.145 "write_zeroes": true 00:23:28.145 }, 00:23:28.145 "uuid": "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f", 00:23:28.145 
"zoned": false 00:23:28.145 } 00:23:28.145 ] 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.145 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2cdJoOYglm 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2cdJoOYglm 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 [2024-05-15 02:24:16.176860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.403 [2024-05-15 02:24:16.177047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2cdJoOYglm 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 [2024-05-15 02:24:16.184860] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2cdJoOYglm 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 [2024-05-15 02:24:16.192848] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.403 [2024-05-15 02:24:16.192921] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:23:28.403 nvme0n1 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.403 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 [ 00:23:28.403 { 00:23:28.403 "aliases": [ 00:23:28.403 "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f" 00:23:28.403 ], 00:23:28.403 "assigned_rate_limits": { 00:23:28.403 "r_mbytes_per_sec": 0, 00:23:28.403 "rw_ios_per_sec": 0, 00:23:28.403 "rw_mbytes_per_sec": 0, 00:23:28.403 "w_mbytes_per_sec": 0 00:23:28.403 }, 00:23:28.403 "block_size": 512, 00:23:28.403 "claimed": false, 00:23:28.403 "driver_specific": { 00:23:28.403 "mp_policy": "active_passive", 00:23:28.403 "nvme": [ 00:23:28.403 { 00:23:28.403 "ctrlr_data": { 00:23:28.403 "ana_reporting": false, 00:23:28.403 "cntlid": 3, 00:23:28.403 "firmware_revision": "24.05", 00:23:28.403 "model_number": "SPDK bdev Controller", 00:23:28.403 "multi_ctrlr": true, 00:23:28.403 "oacs": { 00:23:28.403 "firmware": 0, 00:23:28.403 "format": 0, 00:23:28.403 "ns_manage": 0, 00:23:28.403 "security": 0 00:23:28.403 }, 00:23:28.403 "serial_number": "00000000000000000000", 00:23:28.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.403 "vendor_id": "0x8086" 00:23:28.403 }, 00:23:28.403 "ns_data": { 00:23:28.403 "can_share": true, 00:23:28.403 "id": 1 00:23:28.403 }, 00:23:28.403 "trid": { 00:23:28.403 "adrfam": "IPv4", 00:23:28.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.403 "traddr": "10.0.0.2", 00:23:28.403 "trsvcid": "4421", 00:23:28.403 "trtype": "TCP" 00:23:28.403 }, 00:23:28.403 "vs": { 00:23:28.403 "nvme_version": "1.3" 00:23:28.403 } 00:23:28.403 } 00:23:28.403 ] 00:23:28.403 }, 00:23:28.403 "memory_domains": [ 00:23:28.403 { 00:23:28.403 "dma_device_id": "system", 00:23:28.403 "dma_device_type": 1 00:23:28.403 } 00:23:28.403 ], 00:23:28.403 "name": "nvme0n1", 00:23:28.403 "num_blocks": 2097152, 00:23:28.403 "product_name": "NVMe disk", 00:23:28.404 "supported_io_types": { 00:23:28.404 "abort": true, 00:23:28.404 "compare": true, 00:23:28.404 "compare_and_write": true, 00:23:28.404 "flush": true, 00:23:28.404 "nvme_admin": true, 00:23:28.404 "nvme_io": true, 00:23:28.404 "read": true, 00:23:28.404 "reset": true, 00:23:28.404 "unmap": false, 00:23:28.404 "write": true, 00:23:28.404 "write_zeroes": true 00:23:28.404 }, 00:23:28.404 "uuid": "fd1d27ad-5879-4cda-a5a6-2f0cf9f0eb6f", 00:23:28.404 "zoned": false 00:23:28.404 } 00:23:28.404 ] 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2cdJoOYglm 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.404 rmmod nvme_tcp 00:23:28.404 rmmod nvme_fabrics 00:23:28.404 rmmod nvme_keyring 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 81697 ']' 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 81697 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 81697 ']' 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 81697 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.404 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81697 00:23:28.661 killing process with pid 81697 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81697' 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 81697 00:23:28.661 [2024-05-15 02:24:16.433029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.661 [2024-05-15 02:24:16.433067] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:28.661 [2024-05-15 02:24:16.433079] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 81697 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.661 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:28.662 ************************************ 00:23:28.662 END TEST nvmf_async_init 00:23:28.662 ************************************ 00:23:28.662 00:23:28.662 real 0m2.521s 00:23:28.662 user 0m2.408s 00:23:28.662 sys 0m0.536s 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.662 02:24:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.920 02:24:16 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:28.920 02:24:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:28.920 02:24:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:28.920 02:24:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.920 ************************************ 00:23:28.921 START TEST dma 00:23:28.921 ************************************ 00:23:28.921 02:24:16 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:28.921 * Looking for test storage... 00:23:28.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:28.921 02:24:16 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.921 02:24:16 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.921 02:24:16 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.921 02:24:16 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.921 02:24:16 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:28.921 02:24:16 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.921 02:24:16 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.921 02:24:16 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:28.921 02:24:16 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:28.921 00:23:28.921 real 0m0.096s 00:23:28.921 user 0m0.038s 00:23:28.921 sys 0m0.065s 00:23:28.921 02:24:16 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.921 02:24:16 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:28.921 ************************************ 
00:23:28.921 END TEST dma 00:23:28.921 ************************************ 00:23:28.921 02:24:16 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:28.921 02:24:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:28.921 02:24:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:28.921 02:24:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.921 ************************************ 00:23:28.921 START TEST nvmf_identify 00:23:28.921 ************************************ 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:28.921 * Looking for test storage... 00:23:28.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.921 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:29.180 Cannot find device "nvmf_tgt_br" 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.180 Cannot find device "nvmf_tgt_br2" 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:29.180 Cannot find device "nvmf_tgt_br" 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:29.180 02:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:23:29.180 Cannot find device "nvmf_tgt_br2" 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.180 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.438 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.439 02:24:17 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:29.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:23:29.439 00:23:29.439 --- 10.0.0.2 ping statistics --- 00:23:29.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.439 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:29.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:29.439 00:23:29.439 --- 10.0.0.3 ping statistics --- 00:23:29.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.439 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:29.439 00:23:29.439 --- 10.0.0.1 ping statistics --- 00:23:29.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.439 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=81957 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 81957 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 81957 ']' 00:23:29.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
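Condensed from the nvmf_veth_init trace above: a minimal sketch of the topology the test builds before launching the target, with interface names and addresses taken directly from the log (initiator veth nvmf_init_if at 10.0.0.1 on the host, target veths nvmf_tgt_if / nvmf_tgt_if2 at 10.0.0.2 / 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge). Ordering is simplified relative to nvmf/common.sh.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  # bring the host-side endpoints and the bridge up, then the namespaced ends
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and open TCP/4420 toward the initiator interface
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # same reachability checks as in the trace
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the links up, the script modprobes nvme-tcp and starts the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the pid 81957 process being waited on here.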
00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.439 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.439 [2024-05-15 02:24:17.352085] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:29.439 [2024-05-15 02:24:17.352381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.704 [2024-05-15 02:24:17.490849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.704 [2024-05-15 02:24:17.552581] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.704 [2024-05-15 02:24:17.552901] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.704 [2024-05-15 02:24:17.553039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.704 [2024-05-15 02:24:17.553053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.704 [2024-05-15 02:24:17.553061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.704 [2024-05-15 02:24:17.553161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.704 [2024-05-15 02:24:17.553557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.704 [2024-05-15 02:24:17.553887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.704 [2024-05-15 02:24:17.553893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 [2024-05-15 02:24:17.644963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.704 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 Malloc0 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 [2024-05-15 02:24:17.746846] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:29.992 [2024-05-15 02:24:17.747593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 [ 00:23:29.992 { 00:23:29.992 "allow_any_host": true, 00:23:29.992 "hosts": [], 00:23:29.992 "listen_addresses": [ 00:23:29.992 { 00:23:29.992 "adrfam": "IPv4", 00:23:29.992 "traddr": "10.0.0.2", 00:23:29.992 "trsvcid": "4420", 00:23:29.992 "trtype": "TCP" 00:23:29.992 } 00:23:29.992 ], 00:23:29.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.992 "subtype": "Discovery" 00:23:29.992 }, 00:23:29.992 { 00:23:29.992 "allow_any_host": true, 00:23:29.992 "hosts": [], 00:23:29.992 "listen_addresses": [ 00:23:29.992 { 00:23:29.992 "adrfam": "IPv4", 00:23:29.992 "traddr": "10.0.0.2", 00:23:29.992 "trsvcid": "4420", 00:23:29.992 "trtype": "TCP" 00:23:29.992 } 00:23:29.992 ], 00:23:29.992 "max_cntlid": 65519, 00:23:29.992 "max_namespaces": 32, 00:23:29.992 "min_cntlid": 1, 00:23:29.992 "model_number": "SPDK bdev Controller", 00:23:29.992 "namespaces": [ 00:23:29.992 { 00:23:29.992 "bdev_name": "Malloc0", 00:23:29.992 "eui64": "ABCDEF0123456789", 00:23:29.992 "name": "Malloc0", 00:23:29.992 "nguid": 
"ABCDEF0123456789ABCDEF0123456789", 00:23:29.992 "nsid": 1, 00:23:29.992 "uuid": "56917e30-d561-4ea8-a605-933d956b0363" 00:23:29.992 } 00:23:29.992 ], 00:23:29.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.992 "serial_number": "SPDK00000000000001", 00:23:29.992 "subtype": "NVMe" 00:23:29.992 } 00:23:29.992 ] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.992 02:24:17 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:29.992 [2024-05-15 02:24:17.814964] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:29.992 [2024-05-15 02:24:17.815233] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81992 ] 00:23:29.992 [2024-05-15 02:24:17.954812] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:29.992 [2024-05-15 02:24:17.954902] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:29.992 [2024-05-15 02:24:17.954910] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:29.992 [2024-05-15 02:24:17.954926] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:29.992 [2024-05-15 02:24:17.954937] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:29.992 [2024-05-15 02:24:17.955101] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:29.992 [2024-05-15 02:24:17.955156] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x242b280 0 00:23:29.992 [2024-05-15 02:24:17.959415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:29.992 [2024-05-15 02:24:17.959445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:29.992 [2024-05-15 02:24:17.959452] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:29.992 [2024-05-15 02:24:17.959456] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:29.992 [2024-05-15 02:24:17.959506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.959515] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.959519] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.992 [2024-05-15 02:24:17.959538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:29.992 [2024-05-15 02:24:17.959574] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.992 [2024-05-15 02:24:17.967407] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.992 [2024-05-15 02:24:17.967435] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.992 [2024-05-15 02:24:17.967442] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.967448] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.992 [2024-05-15 02:24:17.967463] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:29.992 [2024-05-15 02:24:17.967474] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:29.992 [2024-05-15 02:24:17.967481] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:29.992 [2024-05-15 02:24:17.967500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.967506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.967511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.992 [2024-05-15 02:24:17.967525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.992 [2024-05-15 02:24:17.967557] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.992 [2024-05-15 02:24:17.967689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.992 [2024-05-15 02:24:17.967697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.992 [2024-05-15 02:24:17.967701] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.992 [2024-05-15 02:24:17.967705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.992 [2024-05-15 02:24:17.967712] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:29.992 [2024-05-15 02:24:17.967721] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:29.992 [2024-05-15 02:24:17.967729] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.967734] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.967738] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.967747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.967768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.967862] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.967869] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.967873] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.967877] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.967885] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:29.993 [2024-05-15 02:24:17.967894] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.967902] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.967907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.967911] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.967918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.967939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.968034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.968042] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.968045] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.968057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.968068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968073] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968077] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.968085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.968104] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.968196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.968205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.968209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.968220] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:29.993 [2024-05-15 02:24:17.968226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.968234] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.968340] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:29.993 [2024-05-15 02:24:17.968355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.968366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968371] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 
02:24:17.968375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.968383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.968431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.968529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.968536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.968540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968545] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.968552] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:29.993 [2024-05-15 02:24:17.968563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968569] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.968581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.968600] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.968693] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.968700] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.968703] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968708] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.968714] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:29.993 [2024-05-15 02:24:17.968720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:29.993 [2024-05-15 02:24:17.968728] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:29.993 [2024-05-15 02:24:17.968745] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:29.993 [2024-05-15 02:24:17.968757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.968770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.993 [2024-05-15 02:24:17.968791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 
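The subsystem JSON a few entries back is produced by the RPC sequence in host/identify.sh, condensed here with the commands copied from the trace. rpc_cmd is the autotest wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock, the one waitforlisten polls above), so the same calls can be replayed by hand against a running nvmf_tgt.

  # target configuration replayed from the trace: TCP transport, one Malloc0 bdev
  # (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the script header), exposed as
  # namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, plus a discovery listener
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems   # returns the JSON dump shown above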
00:23:29.993 [2024-05-15 02:24:17.968933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.993 [2024-05-15 02:24:17.968941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.993 [2024-05-15 02:24:17.968945] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968949] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242b280): datao=0, datal=4096, cccid=0 00:23:29.993 [2024-05-15 02:24:17.968955] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473950) on tqpair(0x242b280): expected_datao=0, payload_size=4096 00:23:29.993 [2024-05-15 02:24:17.968960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968969] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968974] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968984] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.968991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.968995] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.968999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.969010] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:29.993 [2024-05-15 02:24:17.969016] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:29.993 [2024-05-15 02:24:17.969022] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:29.993 [2024-05-15 02:24:17.969027] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:29.993 [2024-05-15 02:24:17.969033] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:29.993 [2024-05-15 02:24:17.969038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:29.993 [2024-05-15 02:24:17.969048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:29.993 [2024-05-15 02:24:17.969061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.969079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.993 [2024-05-15 02:24:17.969101] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.993 [2024-05-15 02:24:17.969202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.993 [2024-05-15 02:24:17.969209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.993 [2024-05-15 02:24:17.969213] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473950) on tqpair=0x242b280 00:23:29.993 [2024-05-15 02:24:17.969227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.969243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.993 [2024-05-15 02:24:17.969250] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969255] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969259] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.969265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.993 [2024-05-15 02:24:17.969272] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.993 [2024-05-15 02:24:17.969280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x242b280) 00:23:29.993 [2024-05-15 02:24:17.969286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.993 [2024-05-15 02:24:17.969293] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:29.994 [2024-05-15 02:24:17.969308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.994 [2024-05-15 02:24:17.969314] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:29.994 [2024-05-15 02:24:17.969327] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:29.994 [2024-05-15 02:24:17.969335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242b280) 00:23:29.994 [2024-05-15 02:24:17.969348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.994 [2024-05-15 02:24:17.969369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473950, cid 0, qid 0 00:23:29.994 [2024-05-15 02:24:17.969377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ab0, cid 1, qid 0 00:23:29.994 [2024-05-15 02:24:17.969382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473c10, cid 2, qid 0 00:23:29.994 
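The FABRIC CONNECT / PROPERTY GET / IDENTIFY / SET FEATURES ASYNC EVENT CONFIGURATION admin commands being traced here all belong to a single identify run; the invocation, copied from earlier in the log, can be reproduced on its own once the target is listening. The -L all flag in that invocation is what turns on the nvme_tcp.c / nvme_ctrlr.c *DEBUG* lines interleaved with the report.

  # discovery-controller identify with full debug logging (command as issued by host/identify.sh@39)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The state transitions logged by nvme_ctrlr.c (connect adminq, read vs, read cap, check en, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, set keep alive timeout) are the generic controller bring-up the tool walks through before it can print the report that follows.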
[2024-05-15 02:24:17.969401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:29.994 [2024-05-15 02:24:17.969407] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ed0, cid 4, qid 0 00:23:29.994 [2024-05-15 02:24:17.969581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.994 [2024-05-15 02:24:17.969589] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.994 [2024-05-15 02:24:17.969593] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473ed0) on tqpair=0x242b280 00:23:29.994 [2024-05-15 02:24:17.969605] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:29.994 [2024-05-15 02:24:17.969611] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:29.994 [2024-05-15 02:24:17.969623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242b280) 00:23:29.994 [2024-05-15 02:24:17.969636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.994 [2024-05-15 02:24:17.969659] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ed0, cid 4, qid 0 00:23:29.994 [2024-05-15 02:24:17.969770] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.994 [2024-05-15 02:24:17.969777] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.994 [2024-05-15 02:24:17.969781] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969785] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242b280): datao=0, datal=4096, cccid=4 00:23:29.994 [2024-05-15 02:24:17.969791] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473ed0) on tqpair(0x242b280): expected_datao=0, payload_size=4096 00:23:29.994 [2024-05-15 02:24:17.969795] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969803] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969807] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.994 [2024-05-15 02:24:17.969829] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.994 [2024-05-15 02:24:17.969833] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969837] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473ed0) on tqpair=0x242b280 00:23:29.994 [2024-05-15 02:24:17.969852] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:29.994 [2024-05-15 02:24:17.969884] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242b280) 00:23:29.994 [2024-05-15 02:24:17.969898] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.994 [2024-05-15 02:24:17.969906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.969915] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242b280) 00:23:29.994 [2024-05-15 02:24:17.969922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.994 [2024-05-15 02:24:17.969948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ed0, cid 4, qid 0 00:23:29.994 [2024-05-15 02:24:17.969956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2474030, cid 5, qid 0 00:23:29.994 [2024-05-15 02:24:17.970111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.994 [2024-05-15 02:24:17.970118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.994 [2024-05-15 02:24:17.970122] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.970126] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242b280): datao=0, datal=1024, cccid=4 00:23:29.994 [2024-05-15 02:24:17.970131] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473ed0) on tqpair(0x242b280): expected_datao=0, payload_size=1024 00:23:29.994 [2024-05-15 02:24:17.970136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.970143] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.970147] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.970154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.994 [2024-05-15 02:24:17.970160] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.994 [2024-05-15 02:24:17.970164] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.994 [2024-05-15 02:24:17.970168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2474030) on tqpair=0x242b280 00:23:30.261 [2024-05-15 02:24:18.010533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.261 [2024-05-15 02:24:18.010576] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.261 [2024-05-15 02:24:18.010582] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473ed0) on tqpair=0x242b280 00:23:30.261 [2024-05-15 02:24:18.010623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242b280) 00:23:30.261 [2024-05-15 02:24:18.010642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.261 [2024-05-15 02:24:18.010678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ed0, cid 4, qid 0 00:23:30.261 [2024-05-15 02:24:18.010845] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.261 [2024-05-15 
02:24:18.010852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.261 [2024-05-15 02:24:18.010856] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010860] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242b280): datao=0, datal=3072, cccid=4 00:23:30.261 [2024-05-15 02:24:18.010866] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473ed0) on tqpair(0x242b280): expected_datao=0, payload_size=3072 00:23:30.261 [2024-05-15 02:24:18.010871] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010880] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010885] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.261 [2024-05-15 02:24:18.010903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.261 [2024-05-15 02:24:18.010907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473ed0) on tqpair=0x242b280 00:23:30.261 [2024-05-15 02:24:18.010924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.010929] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242b280) 00:23:30.261 [2024-05-15 02:24:18.010937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.261 [2024-05-15 02:24:18.010963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473ed0, cid 4, qid 0 00:23:30.261 [2024-05-15 02:24:18.011089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.261 [2024-05-15 02:24:18.011102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.261 [2024-05-15 02:24:18.011107] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.011111] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242b280): datao=0, datal=8, cccid=4 00:23:30.261 [2024-05-15 02:24:18.011117] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473ed0) on tqpair(0x242b280): expected_datao=0, payload_size=8 00:23:30.261 [2024-05-15 02:24:18.011122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.011129] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.011133] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.055425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.261 [2024-05-15 02:24:18.055447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.261 [2024-05-15 02:24:18.055452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.261 [2024-05-15 02:24:18.055457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473ed0) on tqpair=0x242b280 00:23:30.261 ===================================================== 00:23:30.261 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:30.261 ===================================================== 
00:23:30.261 Controller Capabilities/Features 00:23:30.261 ================================ 00:23:30.261 Vendor ID: 0000 00:23:30.261 Subsystem Vendor ID: 0000 00:23:30.261 Serial Number: .................... 00:23:30.261 Model Number: ........................................ 00:23:30.261 Firmware Version: 24.05 00:23:30.261 Recommended Arb Burst: 0 00:23:30.261 IEEE OUI Identifier: 00 00 00 00:23:30.261 Multi-path I/O 00:23:30.261 May have multiple subsystem ports: No 00:23:30.261 May have multiple controllers: No 00:23:30.261 Associated with SR-IOV VF: No 00:23:30.261 Max Data Transfer Size: 131072 00:23:30.261 Max Number of Namespaces: 0 00:23:30.261 Max Number of I/O Queues: 1024 00:23:30.261 NVMe Specification Version (VS): 1.3 00:23:30.261 NVMe Specification Version (Identify): 1.3 00:23:30.261 Maximum Queue Entries: 128 00:23:30.261 Contiguous Queues Required: Yes 00:23:30.261 Arbitration Mechanisms Supported 00:23:30.261 Weighted Round Robin: Not Supported 00:23:30.261 Vendor Specific: Not Supported 00:23:30.261 Reset Timeout: 15000 ms 00:23:30.261 Doorbell Stride: 4 bytes 00:23:30.261 NVM Subsystem Reset: Not Supported 00:23:30.261 Command Sets Supported 00:23:30.261 NVM Command Set: Supported 00:23:30.261 Boot Partition: Not Supported 00:23:30.261 Memory Page Size Minimum: 4096 bytes 00:23:30.261 Memory Page Size Maximum: 4096 bytes 00:23:30.261 Persistent Memory Region: Not Supported 00:23:30.261 Optional Asynchronous Events Supported 00:23:30.261 Namespace Attribute Notices: Not Supported 00:23:30.261 Firmware Activation Notices: Not Supported 00:23:30.261 ANA Change Notices: Not Supported 00:23:30.261 PLE Aggregate Log Change Notices: Not Supported 00:23:30.261 LBA Status Info Alert Notices: Not Supported 00:23:30.261 EGE Aggregate Log Change Notices: Not Supported 00:23:30.261 Normal NVM Subsystem Shutdown event: Not Supported 00:23:30.261 Zone Descriptor Change Notices: Not Supported 00:23:30.261 Discovery Log Change Notices: Supported 00:23:30.261 Controller Attributes 00:23:30.261 128-bit Host Identifier: Not Supported 00:23:30.261 Non-Operational Permissive Mode: Not Supported 00:23:30.261 NVM Sets: Not Supported 00:23:30.261 Read Recovery Levels: Not Supported 00:23:30.261 Endurance Groups: Not Supported 00:23:30.261 Predictable Latency Mode: Not Supported 00:23:30.261 Traffic Based Keep ALive: Not Supported 00:23:30.261 Namespace Granularity: Not Supported 00:23:30.261 SQ Associations: Not Supported 00:23:30.261 UUID List: Not Supported 00:23:30.261 Multi-Domain Subsystem: Not Supported 00:23:30.261 Fixed Capacity Management: Not Supported 00:23:30.261 Variable Capacity Management: Not Supported 00:23:30.261 Delete Endurance Group: Not Supported 00:23:30.261 Delete NVM Set: Not Supported 00:23:30.261 Extended LBA Formats Supported: Not Supported 00:23:30.261 Flexible Data Placement Supported: Not Supported 00:23:30.261 00:23:30.261 Controller Memory Buffer Support 00:23:30.261 ================================ 00:23:30.261 Supported: No 00:23:30.261 00:23:30.261 Persistent Memory Region Support 00:23:30.261 ================================ 00:23:30.261 Supported: No 00:23:30.261 00:23:30.261 Admin Command Set Attributes 00:23:30.261 ============================ 00:23:30.261 Security Send/Receive: Not Supported 00:23:30.261 Format NVM: Not Supported 00:23:30.261 Firmware Activate/Download: Not Supported 00:23:30.261 Namespace Management: Not Supported 00:23:30.261 Device Self-Test: Not Supported 00:23:30.261 Directives: Not Supported 00:23:30.261 NVMe-MI: Not Supported 
00:23:30.261 Virtualization Management: Not Supported 00:23:30.262 Doorbell Buffer Config: Not Supported 00:23:30.262 Get LBA Status Capability: Not Supported 00:23:30.262 Command & Feature Lockdown Capability: Not Supported 00:23:30.262 Abort Command Limit: 1 00:23:30.262 Async Event Request Limit: 4 00:23:30.262 Number of Firmware Slots: N/A 00:23:30.262 Firmware Slot 1 Read-Only: N/A 00:23:30.262 Firmware Activation Without Reset: N/A 00:23:30.262 Multiple Update Detection Support: N/A 00:23:30.262 Firmware Update Granularity: No Information Provided 00:23:30.262 Per-Namespace SMART Log: No 00:23:30.262 Asymmetric Namespace Access Log Page: Not Supported 00:23:30.262 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:30.262 Command Effects Log Page: Not Supported 00:23:30.262 Get Log Page Extended Data: Supported 00:23:30.262 Telemetry Log Pages: Not Supported 00:23:30.262 Persistent Event Log Pages: Not Supported 00:23:30.262 Supported Log Pages Log Page: May Support 00:23:30.262 Commands Supported & Effects Log Page: Not Supported 00:23:30.262 Feature Identifiers & Effects Log Page:May Support 00:23:30.262 NVMe-MI Commands & Effects Log Page: May Support 00:23:30.262 Data Area 4 for Telemetry Log: Not Supported 00:23:30.262 Error Log Page Entries Supported: 128 00:23:30.262 Keep Alive: Not Supported 00:23:30.262 00:23:30.262 NVM Command Set Attributes 00:23:30.262 ========================== 00:23:30.262 Submission Queue Entry Size 00:23:30.262 Max: 1 00:23:30.262 Min: 1 00:23:30.262 Completion Queue Entry Size 00:23:30.262 Max: 1 00:23:30.262 Min: 1 00:23:30.262 Number of Namespaces: 0 00:23:30.262 Compare Command: Not Supported 00:23:30.262 Write Uncorrectable Command: Not Supported 00:23:30.262 Dataset Management Command: Not Supported 00:23:30.262 Write Zeroes Command: Not Supported 00:23:30.262 Set Features Save Field: Not Supported 00:23:30.262 Reservations: Not Supported 00:23:30.262 Timestamp: Not Supported 00:23:30.262 Copy: Not Supported 00:23:30.262 Volatile Write Cache: Not Present 00:23:30.262 Atomic Write Unit (Normal): 1 00:23:30.262 Atomic Write Unit (PFail): 1 00:23:30.262 Atomic Compare & Write Unit: 1 00:23:30.262 Fused Compare & Write: Supported 00:23:30.262 Scatter-Gather List 00:23:30.262 SGL Command Set: Supported 00:23:30.262 SGL Keyed: Supported 00:23:30.262 SGL Bit Bucket Descriptor: Not Supported 00:23:30.262 SGL Metadata Pointer: Not Supported 00:23:30.262 Oversized SGL: Not Supported 00:23:30.262 SGL Metadata Address: Not Supported 00:23:30.262 SGL Offset: Supported 00:23:30.262 Transport SGL Data Block: Not Supported 00:23:30.262 Replay Protected Memory Block: Not Supported 00:23:30.262 00:23:30.262 Firmware Slot Information 00:23:30.262 ========================= 00:23:30.262 Active slot: 0 00:23:30.262 00:23:30.262 00:23:30.262 Error Log 00:23:30.262 ========= 00:23:30.262 00:23:30.262 Active Namespaces 00:23:30.262 ================= 00:23:30.262 Discovery Log Page 00:23:30.262 ================== 00:23:30.262 Generation Counter: 2 00:23:30.262 Number of Records: 2 00:23:30.262 Record Format: 0 00:23:30.262 00:23:30.262 Discovery Log Entry 0 00:23:30.262 ---------------------- 00:23:30.262 Transport Type: 3 (TCP) 00:23:30.262 Address Family: 1 (IPv4) 00:23:30.262 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:30.262 Entry Flags: 00:23:30.262 Duplicate Returned Information: 1 00:23:30.262 Explicit Persistent Connection Support for Discovery: 1 00:23:30.262 Transport Requirements: 00:23:30.262 Secure Channel: Not Required 00:23:30.262 Port ID: 
0 (0x0000) 00:23:30.262 Controller ID: 65535 (0xffff) 00:23:30.262 Admin Max SQ Size: 128 00:23:30.262 Transport Service Identifier: 4420 00:23:30.262 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:30.262 Transport Address: 10.0.0.2 00:23:30.262 Discovery Log Entry 1 00:23:30.262 ---------------------- 00:23:30.262 Transport Type: 3 (TCP) 00:23:30.262 Address Family: 1 (IPv4) 00:23:30.262 Subsystem Type: 2 (NVM Subsystem) 00:23:30.262 Entry Flags: 00:23:30.262 Duplicate Returned Information: 0 00:23:30.262 Explicit Persistent Connection Support for Discovery: 0 00:23:30.262 Transport Requirements: 00:23:30.262 Secure Channel: Not Required 00:23:30.262 Port ID: 0 (0x0000) 00:23:30.262 Controller ID: 65535 (0xffff) 00:23:30.262 Admin Max SQ Size: 128 00:23:30.262 Transport Service Identifier: 4420 00:23:30.262 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:30.262 Transport Address: 10.0.0.2 [2024-05-15 02:24:18.055573] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:30.262 [2024-05-15 02:24:18.055593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.262 [2024-05-15 02:24:18.055602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.262 [2024-05-15 02:24:18.055609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.262 [2024-05-15 02:24:18.055616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.262 [2024-05-15 02:24:18.055628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.262 [2024-05-15 02:24:18.055647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.262 [2024-05-15 02:24:18.055675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.262 [2024-05-15 02:24:18.055766] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.262 [2024-05-15 02:24:18.055774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.262 [2024-05-15 02:24:18.055778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055782] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.262 [2024-05-15 02:24:18.055792] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055797] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055801] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.262 [2024-05-15 02:24:18.055809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.262 [2024-05-15 02:24:18.055834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 
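The discovery log page above advertises two records on 10.0.0.2:4420: the discovery subsystem itself and the nqn.2016-06.io.spdk:cnode1 NVM subsystem configured earlier. As a cross-check outside this test (assuming nvme-cli is installed on the host; the nvme-tcp module was already loaded above), the kernel initiator should report the same two entries:

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # expect the same two records as in the report above: the discovery subsystem and nqn.2016-06.io.spdk:cnode1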
00:23:30.262 [2024-05-15 02:24:18.055933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.262 [2024-05-15 02:24:18.055940] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.262 [2024-05-15 02:24:18.055944] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.262 [2024-05-15 02:24:18.055954] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:30.262 [2024-05-15 02:24:18.055960] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:30.262 [2024-05-15 02:24:18.055971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.055980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.262 [2024-05-15 02:24:18.055988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.262 [2024-05-15 02:24:18.056007] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.262 [2024-05-15 02:24:18.056074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.262 [2024-05-15 02:24:18.056081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.262 [2024-05-15 02:24:18.056085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056090] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.262 [2024-05-15 02:24:18.056102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.262 [2024-05-15 02:24:18.056119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.262 [2024-05-15 02:24:18.056138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.262 [2024-05-15 02:24:18.056202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.262 [2024-05-15 02:24:18.056209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.262 [2024-05-15 02:24:18.056213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.262 [2024-05-15 02:24:18.056229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.262 [2024-05-15 02:24:18.056246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
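While the target (instance id 0, pid 81957) is still up, the tracepoint snapshot suggested by the app_setup_trace notice earlier in the log is the quickest way to see these admin commands from the target's side; a sketch, assuming the spdk_trace binary built in the same tree:

  # live snapshot of the nvmf target's tracepoints (shm name "nvmf", instance id 0, per the notice above;
  # the target was started with tracepoint group mask 0xFFFF)
  spdk_trace -s nvmf -i 0
  # or keep /dev/shm/nvmf_trace.0 around for offline analysis after the target exits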
00:23:30.262 [2024-05-15 02:24:18.056264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.262 [2024-05-15 02:24:18.056330] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.262 [2024-05-15 02:24:18.056337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.262 [2024-05-15 02:24:18.056341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.262 [2024-05-15 02:24:18.056345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.056357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056362] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.056374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.056409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.056478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.056485] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.056489] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056493] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.056505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056510] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056514] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.056522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.056542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.056610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.056617] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.056621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.056637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056642] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.056654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.056672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.056739] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.056746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.056750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056754] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.056766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.056783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.056801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.056866] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.056873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.056877] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056882] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.056893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.056902] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.056910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.056928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.056997] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057005] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057008] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057059] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057123] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 
[2024-05-15 02:24:18.057134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057150] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057155] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057254] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057316] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057420] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057430] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057458] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057539] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057546] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057572] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057742] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057806] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057813] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057816] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.263 [2024-05-15 02:24:18.057850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.263 [2024-05-15 02:24:18.057868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.263 [2024-05-15 02:24:18.057932] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.263 [2024-05-15 02:24:18.057939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.263 [2024-05-15 02:24:18.057943] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.263 [2024-05-15 02:24:18.057959] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.263 [2024-05-15 02:24:18.057964] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.057968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.057976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.057995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058059] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058074] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058091] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058095] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058200] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058204] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058216] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058252] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058322] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058342] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058458] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058468] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058471] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058476] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058488] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058493] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058595] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058599] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058603] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058616] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058651] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058732] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058737] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
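Every FABRIC PROPERTY GET capsule in this stretch is the fabrics equivalent of a register read: over NVMe/TCP there is no MMIO BAR, so CSTS, CC, CAP and VS are reached through Property Get/Set commands on the admin queue. From the host API's point of view the same values are exposed through SPDK's register accessors; a small hedged sketch, assuming a still-connected ctrlr:

/* Hedged sketch: the register snapshots behind the property-get traffic
 * above, read through SPDK's public accessors (spdk/nvme.h). */
#include <stdio.h>
#include <spdk/nvme.h>

static void dump_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("NVMe %u.%u, max queue entries %u, ready=%u shutdown-status=%u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes + 1,
	       (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst);
}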
00:23:30.264 [2024-05-15 02:24:18.058791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.058856] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.058876] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.058880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.058897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.058907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.058915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.058935] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.059002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.059009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.059013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.059029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059038] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.059046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.059064] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.059127] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.059134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.059138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.059154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.059171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.059190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.059256] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.059263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.059267] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.059283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.059292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.059300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.059318] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.063398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.063419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.063425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.063429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.063446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.063451] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.063455] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242b280) 00:23:30.264 [2024-05-15 02:24:18.063464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.264 [2024-05-15 02:24:18.063492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473d70, cid 3, qid 0 00:23:30.264 [2024-05-15 02:24:18.063567] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.264 [2024-05-15 02:24:18.063575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.264 [2024-05-15 02:24:18.063578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.264 [2024-05-15 02:24:18.063583] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2473d70) on tqpair=0x242b280 00:23:30.264 [2024-05-15 02:24:18.063592] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:30.265 00:23:30.265 02:24:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:30.265 [2024-05-15 02:24:18.099375] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:23:30.265 [2024-05-15 02:24:18.099434] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81994 ] 00:23:30.265 [2024-05-15 02:24:18.237843] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:30.265 [2024-05-15 02:24:18.237938] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:30.265 [2024-05-15 02:24:18.237946] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:30.265 [2024-05-15 02:24:18.237963] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:30.265 [2024-05-15 02:24:18.237973] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:30.265 [2024-05-15 02:24:18.238138] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:30.265 [2024-05-15 02:24:18.238193] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe67280 0 00:23:30.265 [2024-05-15 02:24:18.242408] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:30.265 [2024-05-15 02:24:18.242434] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:30.265 [2024-05-15 02:24:18.242440] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:30.265 [2024-05-15 02:24:18.242444] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:30.265 [2024-05-15 02:24:18.242492] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.242499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.242504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.242520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:30.265 [2024-05-15 02:24:18.242553] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.250416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.250445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.250451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.250469] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:30.265 [2024-05-15 02:24:18.250482] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:30.265 [2024-05-15 02:24:18.250490] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:30.265 [2024-05-15 02:24:18.250510] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250516] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250521] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.250535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.265 [2024-05-15 02:24:18.250568] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.250641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.250648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.250652] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.250663] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:30.265 [2024-05-15 02:24:18.250671] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:30.265 [2024-05-15 02:24:18.250680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.250697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.265 [2024-05-15 02:24:18.250718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.250777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.250784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.250788] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250793] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.250799] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:30.265 [2024-05-15 02:24:18.250809] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.250817] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250821] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.250833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.265 [2024-05-15 02:24:18.250853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.250915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.250922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.250926] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.250937] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.250948] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.250957] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.250965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.265 [2024-05-15 02:24:18.250985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.251050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.251057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.251061] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.251065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.251070] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:30.265 [2024-05-15 02:24:18.251076] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.251085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.251191] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:30.265 [2024-05-15 02:24:18.251196] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.251206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.251210] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.251214] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.265 [2024-05-15 02:24:18.251222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.265 [2024-05-15 02:24:18.251243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.265 [2024-05-15 02:24:18.251299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.265 [2024-05-15 02:24:18.251307] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.265 [2024-05-15 02:24:18.251311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.265 [2024-05-15 02:24:18.251315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.265 [2024-05-15 02:24:18.251321] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:30.265 [2024-05-15 02:24:18.251332] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251337] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251341] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.266 [2024-05-15 02:24:18.251368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.266 [2024-05-15 02:24:18.251439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.266 [2024-05-15 02:24:18.251448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.266 [2024-05-15 02:24:18.251452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.266 [2024-05-15 02:24:18.251462] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:30.266 [2024-05-15 02:24:18.251468] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.251477] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:30.266 [2024-05-15 02:24:18.251493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.251506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251510] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.266 [2024-05-15 02:24:18.251542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.266 [2024-05-15 02:24:18.251643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.266 [2024-05-15 02:24:18.251650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.266 [2024-05-15 02:24:18.251655] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251659] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=4096, cccid=0 00:23:30.266 [2024-05-15 02:24:18.251665] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaf950) on tqpair(0xe67280): expected_datao=0, payload_size=4096 00:23:30.266 [2024-05-15 02:24:18.251671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251680] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251685] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 
02:24:18.251694] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.266 [2024-05-15 02:24:18.251701] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.266 [2024-05-15 02:24:18.251704] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.266 [2024-05-15 02:24:18.251719] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:30.266 [2024-05-15 02:24:18.251725] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:30.266 [2024-05-15 02:24:18.251730] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:30.266 [2024-05-15 02:24:18.251735] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:30.266 [2024-05-15 02:24:18.251740] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:30.266 [2024-05-15 02:24:18.251746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.251756] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.251768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.266 [2024-05-15 02:24:18.251808] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.266 [2024-05-15 02:24:18.251874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.266 [2024-05-15 02:24:18.251882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.266 [2024-05-15 02:24:18.251886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251890] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeaf950) on tqpair=0xe67280 00:23:30.266 [2024-05-15 02:24:18.251899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.266 [2024-05-15 02:24:18.251921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe67280) 
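The run that begins at the FABRIC CONNECT above is SPDK's controller initialization state machine for nqn.2016-06.io.spdk:cnode1: read VS and CAP, enable the controller, wait for CSTS.RDY, IDENTIFY the controller, then configure AER, keep-alive, queue counts and the active namespace list (continued below). All of it is driven by a single spdk_nvme_connect() call. A hedged, self-contained sketch of a host program that performs the same NVMe/TCP flow against this target and then walks the namespaces follows; it is an approximation built on the public API, not the spdk_nvme_identify tool's actual source.

/* Hedged sketch: connect to the same NVMe/TCP subsystem the log exercises,
 * print basic identify data and namespaces, then detach. */
#include <stdio.h>
#include <inttypes.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	uint32_t nsid;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";           /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 "
		"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

	/* Drives the state machine traced in the log: icreq, FABRIC CONNECT,
	 * VS/CAP reads, CC.EN=1, CSTS.RDY poll, IDENTIFY, AER, keep-alive,
	 * SET FEATURES NUMBER OF QUEUES, active-namespace discovery. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial: %.20s  Model: %.40s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn);

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("Namespace %u: %" PRIu64 " bytes\n",
		       nsid, spdk_nvme_ns_get_size(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}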
00:23:30.266 [2024-05-15 02:24:18.251936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.266 [2024-05-15 02:24:18.251943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.266 [2024-05-15 02:24:18.251964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.251972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.251979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.266 [2024-05-15 02:24:18.251984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.251998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.252006] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252010] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.252018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.266 [2024-05-15 02:24:18.252040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaf950, cid 0, qid 0 00:23:30.266 [2024-05-15 02:24:18.252048] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafab0, cid 1, qid 0 00:23:30.266 [2024-05-15 02:24:18.252053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafc10, cid 2, qid 0 00:23:30.266 [2024-05-15 02:24:18.252058] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.266 [2024-05-15 02:24:18.252064] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.266 [2024-05-15 02:24:18.252161] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.266 [2024-05-15 02:24:18.252169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.266 [2024-05-15 02:24:18.252173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.266 [2024-05-15 02:24:18.252183] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:30.266 [2024-05-15 02:24:18.252188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:30.266 [2024-05-15 
02:24:18.252201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.252209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.252217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.252233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.266 [2024-05-15 02:24:18.252253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.266 [2024-05-15 02:24:18.252320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.266 [2024-05-15 02:24:18.252328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.266 [2024-05-15 02:24:18.252332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.266 [2024-05-15 02:24:18.252407] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.252422] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:30.266 [2024-05-15 02:24:18.252432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.266 [2024-05-15 02:24:18.252444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.266 [2024-05-15 02:24:18.252467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.266 [2024-05-15 02:24:18.252541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.266 [2024-05-15 02:24:18.252548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.266 [2024-05-15 02:24:18.252552] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252557] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=4096, cccid=4 00:23:30.266 [2024-05-15 02:24:18.252562] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeafed0) on tqpair(0xe67280): expected_datao=0, payload_size=4096 00:23:30.266 [2024-05-15 02:24:18.252567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252575] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252579] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.266 [2024-05-15 02:24:18.252588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.252594] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.252598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252602] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.252618] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:30.267 [2024-05-15 02:24:18.252632] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.252643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.252652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.252667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.252690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.267 [2024-05-15 02:24:18.252769] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.267 [2024-05-15 02:24:18.252776] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.267 [2024-05-15 02:24:18.252780] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252784] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=4096, cccid=4 00:23:30.267 [2024-05-15 02:24:18.252789] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeafed0) on tqpair(0xe67280): expected_datao=0, payload_size=4096 00:23:30.267 [2024-05-15 02:24:18.252794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252802] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252806] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.252821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.252825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.252845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.252857] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.252866] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.252880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.252902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.267 [2024-05-15 02:24:18.252971] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.267 [2024-05-15 02:24:18.252978] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.267 [2024-05-15 02:24:18.252982] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.252986] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=4096, cccid=4 00:23:30.267 [2024-05-15 02:24:18.252991] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeafed0) on tqpair(0xe67280): expected_datao=0, payload_size=4096 00:23:30.267 [2024-05-15 02:24:18.252996] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253003] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253008] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253063] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253071] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253077] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253082] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:30.267 [2024-05-15 02:24:18.253087] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:30.267 [2024-05-15 02:24:18.253093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:30.267 [2024-05-15 02:24:18.253115] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253136] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253141] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253145] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.267 [2024-05-15 02:24:18.253178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.267 [2024-05-15 02:24:18.253186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0030, cid 5, qid 0 00:23:30.267 [2024-05-15 02:24:18.253258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253292] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253296] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0030) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253307] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0030, cid 5, qid 0 00:23:30.267 [2024-05-15 02:24:18.253422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0030) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253458] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253503] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0030, cid 5, qid 0 00:23:30.267 [2024-05-15 02:24:18.253568] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253579] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253583] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0030) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253600] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0030, cid 5, qid 0 00:23:30.267 [2024-05-15 02:24:18.253683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.267 [2024-05-15 02:24:18.253690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.267 [2024-05-15 02:24:18.253698] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253702] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0030) on tqpair=0xe67280 00:23:30.267 [2024-05-15 02:24:18.253717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253778] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.267 [2024-05-15 02:24:18.253782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe67280) 00:23:30.267 [2024-05-15 02:24:18.253789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.267 [2024-05-15 02:24:18.253810] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0030, cid 5, qid 0 00:23:30.267 [2024-05-15 02:24:18.253818] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafed0, cid 4, qid 0 00:23:30.268 [2024-05-15 02:24:18.253823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb0190, cid 6, qid 0 00:23:30.268 [2024-05-15 02:24:18.253828] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb02f0, cid 7, qid 0 00:23:30.268 [2024-05-15 02:24:18.253965] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.268 [2024-05-15 02:24:18.253973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.268 [2024-05-15 02:24:18.253977] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.253981] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=8192, cccid=5 00:23:30.268 [2024-05-15 02:24:18.253986] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb0030) on tqpair(0xe67280): expected_datao=0, payload_size=8192 00:23:30.268 [2024-05-15 02:24:18.253991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254008] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254013] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.268 [2024-05-15 02:24:18.254026] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.268 [2024-05-15 02:24:18.254030] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254034] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=512, cccid=4 00:23:30.268 [2024-05-15 02:24:18.254039] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeafed0) on tqpair(0xe67280): expected_datao=0, payload_size=512 00:23:30.268 [2024-05-15 02:24:18.254044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254051] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254055] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.268 [2024-05-15 02:24:18.254067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.268 [2024-05-15 02:24:18.254071] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254074] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=512, cccid=6 00:23:30.268 [2024-05-15 02:24:18.254079] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb0190) on tqpair(0xe67280): expected_datao=0, payload_size=512 00:23:30.268 [2024-05-15 02:24:18.254084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254091] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254095] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254101] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.268 [2024-05-15 02:24:18.254107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.268 [2024-05-15 02:24:18.254110] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254114] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67280): datao=0, datal=4096, cccid=7 00:23:30.268 [2024-05-15 02:24:18.254119] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xeb02f0) on tqpair(0xe67280): expected_datao=0, payload_size=4096 00:23:30.268 [2024-05-15 02:24:18.254124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254131] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254135] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.268 [2024-05-15 02:24:18.254150] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.268 [2024-05-15 02:24:18.254154] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254159] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0030) on tqpair=0xe67280 00:23:30.268 [2024-05-15 02:24:18.254177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.268 [2024-05-15 02:24:18.254184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.268 [2024-05-15 02:24:18.254188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254192] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafed0) on tqpair=0xe67280 00:23:30.268 [2024-05-15 02:24:18.254203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.268 [2024-05-15 02:24:18.254209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.268 [2024-05-15 02:24:18.254213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb0190) on tqpair=0xe67280 00:23:30.268 [2024-05-15 02:24:18.254228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.268 [2024-05-15 02:24:18.254235] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.268 [2024-05-15 02:24:18.254239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.268 [2024-05-15 02:24:18.254243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb02f0) on tqpair=0xe67280 00:23:30.268 ===================================================== 00:23:30.268 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.268 ===================================================== 00:23:30.268 Controller Capabilities/Features 00:23:30.268 ================================ 00:23:30.268 Vendor ID: 8086 00:23:30.268 Subsystem Vendor ID: 8086 00:23:30.268 Serial Number: SPDK00000000000001 00:23:30.268 Model Number: SPDK bdev Controller 00:23:30.268 Firmware Version: 24.05 00:23:30.268 Recommended Arb Burst: 6 00:23:30.268 IEEE OUI Identifier: e4 d2 5c 00:23:30.268 Multi-path I/O 00:23:30.268 May have multiple subsystem ports: Yes 00:23:30.268 May have multiple controllers: Yes 00:23:30.268 Associated with SR-IOV VF: No 00:23:30.268 Max Data Transfer Size: 131072 00:23:30.268 Max Number of Namespaces: 32 00:23:30.268 Max Number of I/O Queues: 127 00:23:30.268 NVMe Specification Version (VS): 1.3 00:23:30.268 NVMe Specification Version (Identify): 1.3 00:23:30.268 Maximum Queue Entries: 128 00:23:30.268 Contiguous Queues Required: Yes 00:23:30.268 Arbitration Mechanisms Supported 00:23:30.268 Weighted Round Robin: Not Supported 00:23:30.268 Vendor Specific: Not Supported 00:23:30.268 Reset Timeout: 15000 ms 00:23:30.268 Doorbell Stride: 4 bytes 00:23:30.268 
NVM Subsystem Reset: Not Supported 00:23:30.268 Command Sets Supported 00:23:30.268 NVM Command Set: Supported 00:23:30.268 Boot Partition: Not Supported 00:23:30.268 Memory Page Size Minimum: 4096 bytes 00:23:30.268 Memory Page Size Maximum: 4096 bytes 00:23:30.268 Persistent Memory Region: Not Supported 00:23:30.268 Optional Asynchronous Events Supported 00:23:30.268 Namespace Attribute Notices: Supported 00:23:30.268 Firmware Activation Notices: Not Supported 00:23:30.268 ANA Change Notices: Not Supported 00:23:30.268 PLE Aggregate Log Change Notices: Not Supported 00:23:30.268 LBA Status Info Alert Notices: Not Supported 00:23:30.268 EGE Aggregate Log Change Notices: Not Supported 00:23:30.268 Normal NVM Subsystem Shutdown event: Not Supported 00:23:30.268 Zone Descriptor Change Notices: Not Supported 00:23:30.268 Discovery Log Change Notices: Not Supported 00:23:30.268 Controller Attributes 00:23:30.268 128-bit Host Identifier: Supported 00:23:30.268 Non-Operational Permissive Mode: Not Supported 00:23:30.268 NVM Sets: Not Supported 00:23:30.268 Read Recovery Levels: Not Supported 00:23:30.268 Endurance Groups: Not Supported 00:23:30.268 Predictable Latency Mode: Not Supported 00:23:30.268 Traffic Based Keep ALive: Not Supported 00:23:30.268 Namespace Granularity: Not Supported 00:23:30.268 SQ Associations: Not Supported 00:23:30.268 UUID List: Not Supported 00:23:30.268 Multi-Domain Subsystem: Not Supported 00:23:30.268 Fixed Capacity Management: Not Supported 00:23:30.268 Variable Capacity Management: Not Supported 00:23:30.268 Delete Endurance Group: Not Supported 00:23:30.268 Delete NVM Set: Not Supported 00:23:30.268 Extended LBA Formats Supported: Not Supported 00:23:30.268 Flexible Data Placement Supported: Not Supported 00:23:30.268 00:23:30.268 Controller Memory Buffer Support 00:23:30.268 ================================ 00:23:30.268 Supported: No 00:23:30.268 00:23:30.268 Persistent Memory Region Support 00:23:30.268 ================================ 00:23:30.268 Supported: No 00:23:30.268 00:23:30.268 Admin Command Set Attributes 00:23:30.268 ============================ 00:23:30.268 Security Send/Receive: Not Supported 00:23:30.268 Format NVM: Not Supported 00:23:30.268 Firmware Activate/Download: Not Supported 00:23:30.268 Namespace Management: Not Supported 00:23:30.268 Device Self-Test: Not Supported 00:23:30.268 Directives: Not Supported 00:23:30.268 NVMe-MI: Not Supported 00:23:30.268 Virtualization Management: Not Supported 00:23:30.268 Doorbell Buffer Config: Not Supported 00:23:30.268 Get LBA Status Capability: Not Supported 00:23:30.268 Command & Feature Lockdown Capability: Not Supported 00:23:30.268 Abort Command Limit: 4 00:23:30.268 Async Event Request Limit: 4 00:23:30.268 Number of Firmware Slots: N/A 00:23:30.268 Firmware Slot 1 Read-Only: N/A 00:23:30.268 Firmware Activation Without Reset: N/A 00:23:30.268 Multiple Update Detection Support: N/A 00:23:30.268 Firmware Update Granularity: No Information Provided 00:23:30.268 Per-Namespace SMART Log: No 00:23:30.268 Asymmetric Namespace Access Log Page: Not Supported 00:23:30.268 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:30.268 Command Effects Log Page: Supported 00:23:30.268 Get Log Page Extended Data: Supported 00:23:30.268 Telemetry Log Pages: Not Supported 00:23:30.268 Persistent Event Log Pages: Not Supported 00:23:30.268 Supported Log Pages Log Page: May Support 00:23:30.268 Commands Supported & Effects Log Page: Not Supported 00:23:30.268 Feature Identifiers & Effects Log Page:May Support 
00:23:30.268 NVMe-MI Commands & Effects Log Page: May Support 00:23:30.268 Data Area 4 for Telemetry Log: Not Supported 00:23:30.268 Error Log Page Entries Supported: 128 00:23:30.268 Keep Alive: Supported 00:23:30.268 Keep Alive Granularity: 10000 ms 00:23:30.268 00:23:30.268 NVM Command Set Attributes 00:23:30.268 ========================== 00:23:30.268 Submission Queue Entry Size 00:23:30.268 Max: 64 00:23:30.268 Min: 64 00:23:30.268 Completion Queue Entry Size 00:23:30.269 Max: 16 00:23:30.269 Min: 16 00:23:30.269 Number of Namespaces: 32 00:23:30.269 Compare Command: Supported 00:23:30.269 Write Uncorrectable Command: Not Supported 00:23:30.269 Dataset Management Command: Supported 00:23:30.269 Write Zeroes Command: Supported 00:23:30.269 Set Features Save Field: Not Supported 00:23:30.269 Reservations: Supported 00:23:30.269 Timestamp: Not Supported 00:23:30.269 Copy: Supported 00:23:30.269 Volatile Write Cache: Present 00:23:30.269 Atomic Write Unit (Normal): 1 00:23:30.269 Atomic Write Unit (PFail): 1 00:23:30.269 Atomic Compare & Write Unit: 1 00:23:30.269 Fused Compare & Write: Supported 00:23:30.269 Scatter-Gather List 00:23:30.269 SGL Command Set: Supported 00:23:30.269 SGL Keyed: Supported 00:23:30.269 SGL Bit Bucket Descriptor: Not Supported 00:23:30.269 SGL Metadata Pointer: Not Supported 00:23:30.269 Oversized SGL: Not Supported 00:23:30.269 SGL Metadata Address: Not Supported 00:23:30.269 SGL Offset: Supported 00:23:30.269 Transport SGL Data Block: Not Supported 00:23:30.269 Replay Protected Memory Block: Not Supported 00:23:30.269 00:23:30.269 Firmware Slot Information 00:23:30.269 ========================= 00:23:30.269 Active slot: 1 00:23:30.269 Slot 1 Firmware Revision: 24.05 00:23:30.269 00:23:30.269 00:23:30.269 Commands Supported and Effects 00:23:30.269 ============================== 00:23:30.269 Admin Commands 00:23:30.269 -------------- 00:23:30.269 Get Log Page (02h): Supported 00:23:30.269 Identify (06h): Supported 00:23:30.269 Abort (08h): Supported 00:23:30.269 Set Features (09h): Supported 00:23:30.269 Get Features (0Ah): Supported 00:23:30.269 Asynchronous Event Request (0Ch): Supported 00:23:30.269 Keep Alive (18h): Supported 00:23:30.269 I/O Commands 00:23:30.269 ------------ 00:23:30.269 Flush (00h): Supported LBA-Change 00:23:30.269 Write (01h): Supported LBA-Change 00:23:30.269 Read (02h): Supported 00:23:30.269 Compare (05h): Supported 00:23:30.269 Write Zeroes (08h): Supported LBA-Change 00:23:30.269 Dataset Management (09h): Supported LBA-Change 00:23:30.269 Copy (19h): Supported LBA-Change 00:23:30.269 Unknown (79h): Supported LBA-Change 00:23:30.269 Unknown (7Ah): Supported 00:23:30.269 00:23:30.269 Error Log 00:23:30.269 ========= 00:23:30.269 00:23:30.269 Arbitration 00:23:30.269 =========== 00:23:30.269 Arbitration Burst: 1 00:23:30.269 00:23:30.269 Power Management 00:23:30.269 ================ 00:23:30.269 Number of Power States: 1 00:23:30.269 Current Power State: Power State #0 00:23:30.269 Power State #0: 00:23:30.269 Max Power: 0.00 W 00:23:30.269 Non-Operational State: Operational 00:23:30.269 Entry Latency: Not Reported 00:23:30.269 Exit Latency: Not Reported 00:23:30.269 Relative Read Throughput: 0 00:23:30.269 Relative Read Latency: 0 00:23:30.269 Relative Write Throughput: 0 00:23:30.269 Relative Write Latency: 0 00:23:30.269 Idle Power: Not Reported 00:23:30.269 Active Power: Not Reported 00:23:30.269 Non-Operational Permissive Mode: Not Supported 00:23:30.269 00:23:30.269 Health Information 00:23:30.269 ================== 
00:23:30.269 Critical Warnings: 00:23:30.269 Available Spare Space: OK 00:23:30.269 Temperature: OK 00:23:30.269 Device Reliability: OK 00:23:30.269 Read Only: No 00:23:30.269 Volatile Memory Backup: OK 00:23:30.269 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:30.269 Temperature Threshold: [2024-05-15 02:24:18.254354] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.254362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.254370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.258400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb02f0, cid 7, qid 0 00:23:30.269 [2024-05-15 02:24:18.258437] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.258447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.258452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeb02f0) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.258512] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:30.269 [2024-05-15 02:24:18.258532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.269 [2024-05-15 02:24:18.258540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.269 [2024-05-15 02:24:18.258547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.269 [2024-05-15 02:24:18.258554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.269 [2024-05-15 02:24:18.258565] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258575] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.258584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.258614] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.269 [2024-05-15 02:24:18.258679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.258687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.258691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258695] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.258704] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258713] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.258721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.258745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.269 [2024-05-15 02:24:18.258828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.258835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.258839] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.258849] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:30.269 [2024-05-15 02:24:18.258855] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:30.269 [2024-05-15 02:24:18.258866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.258883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.258903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.269 [2024-05-15 02:24:18.258961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.258968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.258972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.258988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.258998] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.259006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.259025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.269 [2024-05-15 02:24:18.259086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.259093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.259097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.259112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259117] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.259129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.259149] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.269 [2024-05-15 02:24:18.259205] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.269 [2024-05-15 02:24:18.259212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.269 [2024-05-15 02:24:18.259216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.269 [2024-05-15 02:24:18.259231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.269 [2024-05-15 02:24:18.259240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.269 [2024-05-15 02:24:18.259248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.269 [2024-05-15 02:24:18.259267] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259326] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259341] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259358] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.259369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.259402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259466] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 
[2024-05-15 02:24:18.259511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.259531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.259632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.259652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259723] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259728] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259739] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259744] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.259755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.259775] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259840] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259844] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259865] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.259872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.259892] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.259947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.259954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.259958] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259962] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.259973] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259979] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.259983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.259990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.260009] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.260068] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.260075] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.260079] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260083] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.260094] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260099] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.260111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.260132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.260190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.260197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.260201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260205] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.260216] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260225] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.260233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.260253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.260307] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 
[2024-05-15 02:24:18.260314] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.260318] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260322] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.260333] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260342] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.260350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.260369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.260441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.260459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.260463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.260479] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.270 [2024-05-15 02:24:18.260496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.270 [2024-05-15 02:24:18.260517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.270 [2024-05-15 02:24:18.260573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.270 [2024-05-15 02:24:18.260580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.270 [2024-05-15 02:24:18.260584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.270 [2024-05-15 02:24:18.260600] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.270 [2024-05-15 02:24:18.260605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260609] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.260616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.260636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.260692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.260699] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.260702] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:30.271 [2024-05-15 02:24:18.260707] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.260718] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260727] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.260735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.260754] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.260809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.260816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.260820] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260824] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.260835] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260841] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260845] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.260852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.260872] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.260930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.260937] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.260941] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260945] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.260956] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260961] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.260965] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.260973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.260993] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261055] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261059] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261063] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261074] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261080] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261173] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261181] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261286] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261293] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261297] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261301] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261348] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261451] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261455] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261597] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261601] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261629] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261686] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261696] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261815] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261839] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261867] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.261925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.261932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.261936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.261951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.261961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.261968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.261988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.271 [2024-05-15 02:24:18.262042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.271 [2024-05-15 02:24:18.262050] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.271 [2024-05-15 02:24:18.262054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.262058] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.271 [2024-05-15 02:24:18.262069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.262074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.271 [2024-05-15 02:24:18.262078] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.271 [2024-05-15 02:24:18.262086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.271 [2024-05-15 02:24:18.262105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.272 [2024-05-15 02:24:18.262163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.272 [2024-05-15 02:24:18.262170] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.272 [2024-05-15 02:24:18.262174] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.272 [2024-05-15 02:24:18.262189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262195] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262199] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.272 [2024-05-15 02:24:18.262207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.272 [2024-05-15 02:24:18.262226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 
00:23:30.272 [2024-05-15 02:24:18.262282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.272 [2024-05-15 02:24:18.262301] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.272 [2024-05-15 02:24:18.262305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.272 [2024-05-15 02:24:18.262321] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262327] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.262331] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.272 [2024-05-15 02:24:18.262339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.272 [2024-05-15 02:24:18.262359] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.272 [2024-05-15 02:24:18.266405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.272 [2024-05-15 02:24:18.266426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.272 [2024-05-15 02:24:18.266431] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.266436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.272 [2024-05-15 02:24:18.266451] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.266457] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.266461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67280) 00:23:30.272 [2024-05-15 02:24:18.266470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.272 [2024-05-15 02:24:18.266497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeafd70, cid 3, qid 0 00:23:30.272 [2024-05-15 02:24:18.266558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.272 [2024-05-15 02:24:18.266566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.272 [2024-05-15 02:24:18.266570] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.272 [2024-05-15 02:24:18.266574] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xeafd70) on tqpair=0xe67280 00:23:30.272 [2024-05-15 02:24:18.266583] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:30.531 0 Kelvin (-273 Celsius) 00:23:30.531 Available Spare: 0% 00:23:30.531 Available Spare Threshold: 0% 00:23:30.531 Life Percentage Used: 0% 00:23:30.531 Data Units Read: 0 00:23:30.531 Data Units Written: 0 00:23:30.531 Host Read Commands: 0 00:23:30.531 Host Write Commands: 0 00:23:30.531 Controller Busy Time: 0 minutes 00:23:30.531 Power Cycles: 0 00:23:30.531 Power On Hours: 0 hours 00:23:30.531 Unsafe Shutdowns: 0 00:23:30.531 Unrecoverable Media Errors: 0 00:23:30.531 Lifetime Error Log Entries: 0 00:23:30.531 Warning Temperature Time: 0 minutes 00:23:30.531 Critical Temperature Time: 0 minutes 00:23:30.531 00:23:30.531 Number of Queues 00:23:30.531 ================ 
00:23:30.531 Number of I/O Submission Queues: 127 00:23:30.531 Number of I/O Completion Queues: 127 00:23:30.531 00:23:30.531 Active Namespaces 00:23:30.531 ================= 00:23:30.531 Namespace ID:1 00:23:30.531 Error Recovery Timeout: Unlimited 00:23:30.531 Command Set Identifier: NVM (00h) 00:23:30.531 Deallocate: Supported 00:23:30.531 Deallocated/Unwritten Error: Not Supported 00:23:30.531 Deallocated Read Value: Unknown 00:23:30.531 Deallocate in Write Zeroes: Not Supported 00:23:30.531 Deallocated Guard Field: 0xFFFF 00:23:30.531 Flush: Supported 00:23:30.531 Reservation: Supported 00:23:30.531 Namespace Sharing Capabilities: Multiple Controllers 00:23:30.531 Size (in LBAs): 131072 (0GiB) 00:23:30.531 Capacity (in LBAs): 131072 (0GiB) 00:23:30.531 Utilization (in LBAs): 131072 (0GiB) 00:23:30.531 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:30.531 EUI64: ABCDEF0123456789 00:23:30.531 UUID: 56917e30-d561-4ea8-a605-933d956b0363 00:23:30.531 Thin Provisioning: Not Supported 00:23:30.531 Per-NS Atomic Units: Yes 00:23:30.531 Atomic Boundary Size (Normal): 0 00:23:30.531 Atomic Boundary Size (PFail): 0 00:23:30.531 Atomic Boundary Offset: 0 00:23:30.531 Maximum Single Source Range Length: 65535 00:23:30.531 Maximum Copy Length: 65535 00:23:30.531 Maximum Source Range Count: 1 00:23:30.531 NGUID/EUI64 Never Reused: No 00:23:30.531 Namespace Write Protected: No 00:23:30.531 Number of LBA Formats: 1 00:23:30.531 Current LBA Format: LBA Format #00 00:23:30.531 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:30.531 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.531 rmmod nvme_tcp 00:23:30.531 rmmod nvme_fabrics 00:23:30.531 rmmod nvme_keyring 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 81957 ']' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 81957 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 81957 ']' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 81957 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # 
uname 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81957 00:23:30.531 killing process with pid 81957 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81957' 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 81957 00:23:30.531 [2024-05-15 02:24:18.414772] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:30.531 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 81957 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:30.791 00:23:30.791 real 0m1.823s 00:23:30.791 user 0m4.273s 00:23:30.791 sys 0m0.544s 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:30.791 02:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.791 ************************************ 00:23:30.791 END TEST nvmf_identify 00:23:30.791 ************************************ 00:23:30.791 02:24:18 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.791 02:24:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:30.791 02:24:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:30.791 02:24:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.791 ************************************ 00:23:30.791 START TEST nvmf_perf 00:23:30.791 ************************************ 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.791 * Looking for test storage... 
00:23:30.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.791 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:31.050 Cannot find device "nvmf_tgt_br" 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:31.050 Cannot find device "nvmf_tgt_br2" 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:31.050 Cannot find device "nvmf_tgt_br" 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:31.050 Cannot find device "nvmf_tgt_br2" 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:31.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:31.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:31.050 
02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:31.050 02:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:31.050 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:31.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:23:31.309 00:23:31.309 --- 10.0.0.2 ping statistics --- 00:23:31.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.309 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:31.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:31.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:23:31.309 00:23:31.309 --- 10.0.0.3 ping statistics --- 00:23:31.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.309 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:31.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:31.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:31.309 00:23:31.309 --- 10.0.0.1 ping statistics --- 00:23:31.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.309 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=82151 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 82151 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 82151 ']' 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.309 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.310 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.310 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.310 02:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:31.310 [2024-05-15 02:24:19.222355] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:31.310 [2024-05-15 02:24:19.222721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.568 [2024-05-15 02:24:19.368502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.568 [2024-05-15 02:24:19.429300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.568 [2024-05-15 02:24:19.429368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
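Once the target started above (nvmfpid=82151, launched with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) is listening on its RPC socket, perf.sh provisions it over JSON-RPC. The sequence below condenses the rpc.py calls that appear later in this trace; the 64 MiB / 512 B malloc bdev, the Nvme0n1 namespace, the cnode1 subsystem, and the 10.0.0.2:4420 listeners are all taken from this run, and the $rpc shorthand is only for readability.
# Condensed from the rpc.py calls traced below; a sketch, not a replacement for perf.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                                        # TCP transport, default options
$rpc bdev_malloc_create 64 512                                              # 64 MiB malloc bdev, 512-byte blocks (Malloc0)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1               # Nvme0n1 comes from the local 0000:00:10.0 controller attached via gen_nvme.sh
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420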
00:23:31.568 [2024-05-15 02:24:19.429380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.568 [2024-05-15 02:24:19.429411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.568 [2024-05-15 02:24:19.429419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.568 [2024-05-15 02:24:19.430181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.568 [2024-05-15 02:24:19.430347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.568 [2024-05-15 02:24:19.430470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.568 [2024-05-15 02:24:19.430529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:32.503 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:32.762 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:32.762 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:33.020 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:33.020 02:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:33.279 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:33.279 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:33.279 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:33.279 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:33.279 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.536 [2024-05-15 02:24:21.382195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.536 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.795 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:33.795 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.054 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:34.054 02:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:34.351 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.630 [2024-05-15 02:24:22.383161] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:34.630 [2024-05-15 02:24:22.383472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.630 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:34.889 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:34.889 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:34.889 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:34.889 02:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:35.823 Initializing NVMe Controllers 00:23:35.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:35.823 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:35.823 Initialization complete. Launching workers. 00:23:35.823 ======================================================== 00:23:35.823 Latency(us) 00:23:35.823 Device Information : IOPS MiB/s Average min max 00:23:35.823 PCIE (0000:00:10.0) NSID 1 from core 0: 24640.00 96.25 1298.25 357.12 6779.21 00:23:35.823 ======================================================== 00:23:35.823 Total : 24640.00 96.25 1298.25 357.12 6779.21 00:23:35.823 00:23:35.823 02:24:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:37.199 Initializing NVMe Controllers 00:23:37.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:37.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:37.199 Initialization complete. Launching workers. 00:23:37.199 ======================================================== 00:23:37.199 Latency(us) 00:23:37.199 Device Information : IOPS MiB/s Average min max 00:23:37.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3479.77 13.59 287.05 114.92 4240.31 00:23:37.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.99 0.48 8194.24 7940.10 12027.28 00:23:37.199 ======================================================== 00:23:37.199 Total : 3602.77 14.07 556.98 114.92 12027.28 00:23:37.199 00:23:37.199 02:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.577 Initializing NVMe Controllers 00:23:38.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:38.577 Initialization complete. Launching workers. 
00:23:38.577 ======================================================== 00:23:38.577 Latency(us) 00:23:38.577 Device Information : IOPS MiB/s Average min max 00:23:38.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8580.01 33.52 3730.55 725.70 9237.41 00:23:38.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2679.45 10.47 12060.12 6538.71 23022.03 00:23:38.577 ======================================================== 00:23:38.577 Total : 11259.46 43.98 5712.76 725.70 23022.03 00:23:38.577 00:23:38.577 02:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:38.577 02:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.139 Initializing NVMe Controllers 00:23:41.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.139 Controller IO queue size 128, less than required. 00:23:41.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.139 Controller IO queue size 128, less than required. 00:23:41.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:41.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:41.139 Initialization complete. Launching workers. 00:23:41.139 ======================================================== 00:23:41.139 Latency(us) 00:23:41.139 Device Information : IOPS MiB/s Average min max 00:23:41.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1222.45 305.61 107544.05 56093.48 236328.86 00:23:41.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 542.98 135.74 249420.03 107091.47 394122.26 00:23:41.139 ======================================================== 00:23:41.139 Total : 1765.42 441.36 151179.72 56093.48 394122.26 00:23:41.139 00:23:41.139 02:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:41.407 Initializing NVMe Controllers 00:23:41.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.407 Controller IO queue size 128, less than required. 00:23:41.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.407 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:41.407 Controller IO queue size 128, less than required. 00:23:41.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.407 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:23:41.407 WARNING: Some requested NVMe devices were skipped 00:23:41.407 No valid NVMe controllers or AIO or URING devices found 00:23:41.407 02:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:43.942 Initializing NVMe Controllers 00:23:43.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.942 Controller IO queue size 128, less than required. 00:23:43.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.942 Controller IO queue size 128, less than required. 00:23:43.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.942 Initialization complete. Launching workers. 00:23:43.942 00:23:43.942 ==================== 00:23:43.942 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:43.942 TCP transport: 00:23:43.942 polls: 12196 00:23:43.942 idle_polls: 6652 00:23:43.942 sock_completions: 5544 00:23:43.942 nvme_completions: 3329 00:23:43.942 submitted_requests: 4972 00:23:43.942 queued_requests: 1 00:23:43.942 00:23:43.942 ==================== 00:23:43.942 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:43.942 TCP transport: 00:23:43.942 polls: 12093 00:23:43.942 idle_polls: 7968 00:23:43.942 sock_completions: 4125 00:23:43.942 nvme_completions: 7331 00:23:43.942 submitted_requests: 10932 00:23:43.942 queued_requests: 1 00:23:43.942 ======================================================== 00:23:43.942 Latency(us) 00:23:43.942 Device Information : IOPS MiB/s Average min max 00:23:43.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 831.98 207.99 158111.62 92293.45 273415.85 00:23:43.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1832.45 458.11 69785.64 33135.04 119529.35 00:23:43.942 ======================================================== 00:23:43.942 Total : 2664.42 666.11 97365.76 33135.04 273415.85 00:23:43.942 00:23:43.942 02:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:43.942 02:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.200 rmmod nvme_tcp 00:23:44.200 rmmod nvme_fabrics 00:23:44.200 rmmod nvme_keyring 00:23:44.200 02:24:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:44.200 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 82151 ']' 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 82151 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 82151 ']' 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 82151 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82151 00:23:44.201 killing process with pid 82151 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82151' 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 82151 00:23:44.201 [2024-05-15 02:24:32.164947] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:44.201 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 82151 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.136 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.137 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.137 02:24:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:45.137 ************************************ 00:23:45.137 END TEST nvmf_perf 00:23:45.137 ************************************ 00:23:45.137 00:23:45.137 real 0m14.189s 00:23:45.137 user 0m52.585s 00:23:45.137 sys 0m3.344s 00:23:45.137 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:45.137 02:24:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:45.137 02:24:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:45.137 02:24:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:45.137 02:24:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:45.137 02:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.137 ************************************ 00:23:45.137 START TEST nvmf_fio_host 00:23:45.137 ************************************ 00:23:45.137 02:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:45.137 * Looking for test storage... 00:23:45.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.137 02:24:33 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:45.137 02:24:33 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:45.137 Cannot find device "nvmf_tgt_br" 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:45.137 Cannot find device "nvmf_tgt_br2" 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:45.137 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:45.138 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:45.138 Cannot find device "nvmf_tgt_br" 00:23:45.138 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:45.138 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:45.138 Cannot find device "nvmf_tgt_br2" 00:23:45.138 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:45.138 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:45.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:45.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:45.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:23:45.396 00:23:45.396 --- 10.0.0.2 ping statistics --- 00:23:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.396 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:45.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:45.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:23:45.396 00:23:45.396 --- 10.0.0.3 ping statistics --- 00:23:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.396 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:45.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:45.396 00:23:45.396 --- 10.0.0.1 ping statistics --- 00:23:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.396 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=82546 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 82546 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 82546 ']' 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.396 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:45.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.397 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.397 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:45.397 02:24:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.671 [2024-05-15 02:24:33.458823] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:45.671 [2024-05-15 02:24:33.459103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.671 [2024-05-15 02:24:33.601953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.671 [2024-05-15 02:24:33.662937] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.671 [2024-05-15 02:24:33.663000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:45.671 [2024-05-15 02:24:33.663013] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.671 [2024-05-15 02:24:33.663022] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.671 [2024-05-15 02:24:33.663029] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.671 [2024-05-15 02:24:33.663181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.671 [2024-05-15 02:24:33.663409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.671 [2024-05-15 02:24:33.664141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.671 [2024-05-15 02:24:33.664189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.606 [2024-05-15 02:24:34.446633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.606 Malloc1 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.606 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.607 [2024-05-15 02:24:34.542614] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:46.607 [2024-05-15 02:24:34.542893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:46.607 02:24:34 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:46.866 fio-3.35 00:23:46.866 Starting 1 thread 00:23:49.401 00:23:49.401 test: (groupid=0, jobs=1): err= 0: pid=82613: Wed May 15 02:24:37 2024 00:23:49.401 read: IOPS=8579, BW=33.5MiB/s (35.1MB/s)(67.3MiB/2007msec) 00:23:49.401 slat (usec): min=2, max=409, avg= 2.72, stdev= 3.80 00:23:49.401 clat (usec): min=4029, max=13858, avg=7800.86, stdev=856.29 00:23:49.401 lat (usec): min=4067, max=13861, avg=7803.58, stdev=856.25 00:23:49.401 clat percentiles (usec): 00:23:49.401 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:23:49.401 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:23:49.401 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[ 9503], 00:23:49.401 | 99.00th=[10814], 99.50th=[11731], 99.90th=[12649], 99.95th=[13042], 00:23:49.401 | 99.99th=[13829] 00:23:49.401 bw ( KiB/s): min=31512, max=36232, per=99.89%, avg=34282.00, stdev=2107.01, samples=4 00:23:49.401 iops : min= 7878, max= 9058, avg=8570.50, stdev=526.75, samples=4 00:23:49.401 write: IOPS=8571, BW=33.5MiB/s (35.1MB/s)(67.2MiB/2007msec); 0 zone resets 00:23:49.401 slat (usec): min=2, max=230, avg= 2.88, stdev= 2.14 00:23:49.401 clat (usec): min=2686, max=13175, avg=7054.29, stdev=758.90 00:23:49.401 lat (usec): min=2700, max=13177, avg=7057.17, stdev=758.87 00:23:49.401 clat percentiles (usec): 00:23:49.401 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:23:49.401 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:23:49.401 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7963], 95.00th=[ 8586], 00:23:49.401 | 99.00th=[ 9765], 99.50th=[10683], 99.90th=[11731], 99.95th=[12256], 00:23:49.401 | 99.99th=[13042] 00:23:49.401 bw ( KiB/s): min=31472, max=35728, per=100.00%, avg=34310.00, stdev=1931.51, samples=4 00:23:49.401 iops : min= 7868, max= 8932, avg=8577.50, stdev=482.88, samples=4 00:23:49.401 lat (msec) : 4=0.03%, 10=98.35%, 20=1.62% 00:23:49.401 cpu : usr=66.45%, sys=24.33%, ctx=25, majf=0, minf=5 00:23:49.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:49.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:49.401 issued rwts: total=17220,17203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:49.401 00:23:49.401 Run status group 0 (all jobs): 00:23:49.401 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=67.3MiB (70.5MB), run=2007-2007msec 00:23:49.401 WRITE: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=67.2MiB (70.5MB), run=2007-2007msec 00:23:49.401 02:24:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.401 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.401 02:24:37 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:49.401 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:49.401 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:49.402 02:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.402 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:49.402 fio-3.35 00:23:49.402 Starting 1 thread 00:23:51.933 00:23:51.933 test: (groupid=0, jobs=1): err= 0: pid=82649: Wed May 15 02:24:39 2024 00:23:51.933 read: IOPS=7647, BW=119MiB/s (125MB/s)(240MiB/2007msec) 00:23:51.933 slat (usec): min=3, max=120, avg= 4.04, stdev= 1.84 00:23:51.933 clat (usec): min=2361, max=20388, avg=9821.83, stdev=2353.83 00:23:51.933 lat (usec): min=2364, max=20392, avg=9825.87, stdev=2353.92 00:23:51.933 clat percentiles (usec): 00:23:51.933 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7635], 00:23:51.933 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10552], 00:23:51.933 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12649], 95.00th=[13960], 00:23:51.933 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17695], 99.95th=[18220], 00:23:51.933 | 99.99th=[19006] 00:23:51.933 bw ( KiB/s): min=52224, max=73664, per=51.43%, avg=62936.00, stdev=9123.36, samples=4 00:23:51.933 iops : min= 3264, max= 4604, avg=3933.50, stdev=570.21, samples=4 00:23:51.933 write: IOPS=4586, BW=71.7MiB/s (75.1MB/s)(129MiB/1802msec); 0 zone 
resets 00:23:51.933 slat (usec): min=37, max=173, avg=39.98, stdev= 4.61 00:23:51.933 clat (usec): min=5166, max=20910, avg=12104.58, stdev=2394.95 00:23:51.933 lat (usec): min=5204, max=20948, avg=12144.56, stdev=2395.23 00:23:51.933 clat percentiles (usec): 00:23:51.933 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10028], 00:23:51.933 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:23:51.933 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15664], 95.00th=[16712], 00:23:51.933 | 99.00th=[18482], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:23:51.933 | 99.99th=[20841] 00:23:51.933 bw ( KiB/s): min=53344, max=76896, per=89.26%, avg=65504.00, stdev=10037.77, samples=4 00:23:51.933 iops : min= 3334, max= 4806, avg=4094.00, stdev=627.36, samples=4 00:23:51.933 lat (msec) : 4=0.17%, 10=39.96%, 20=59.79%, 50=0.08% 00:23:51.933 cpu : usr=71.54%, sys=18.59%, ctx=7, majf=0, minf=22 00:23:51.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:51.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:51.933 issued rwts: total=15349,8265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:51.933 00:23:51.933 Run status group 0 (all jobs): 00:23:51.933 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=240MiB (251MB), run=2007-2007msec 00:23:51.933 WRITE: bw=71.7MiB/s (75.1MB/s), 71.7MiB/s-71.7MiB/s (75.1MB/s-75.1MB/s), io=129MiB (135MB), run=1802-1802msec 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.933 rmmod nvme_tcp 00:23:51.933 rmmod nvme_fabrics 00:23:51.933 rmmod nvme_keyring 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 82546 ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 82546 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 82546 ']' 
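Both fio passes above drive I/O through the SPDK NVMe fio plugin rather than the kernel initiator. Stripped of the xtrace noise, the invocation host/fio.sh builds is essentially the sketch below (paths are the ones from this workspace and would differ elsewhere):

  # Preload the SPDK external ioengine and hand fio a 'filename' string that
  # the plugin parses as an NVMe/TCP connection descriptor, not a file path.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second pass swaps example_config.fio for mock_sgl_config.fio and drops --bs=4096, which is why it reports 16.0KiB blocks.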
00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 82546 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82546 00:23:51.933 killing process with pid 82546 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82546' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 82546 00:23:51.933 [2024-05-15 02:24:39.650984] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 82546 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:51.933 00:23:51.933 real 0m6.951s 00:23:51.933 user 0m27.287s 00:23:51.933 sys 0m2.029s 00:23:51.933 ************************************ 00:23:51.933 END TEST nvmf_fio_host 00:23:51.933 ************************************ 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.933 02:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.933 02:24:39 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:51.933 02:24:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:51.933 02:24:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.933 02:24:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.933 ************************************ 00:23:51.933 START TEST nvmf_failover 00:23:51.933 ************************************ 00:23:51.933 02:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:52.190 * Looking for test storage... 
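The cleanup that closes nvmf_fio_host here (and that nvmf_failover repeats at its own exit) reduces to a short sequence. A sketch of what the trace shows, with the namespace removal hedged because _remove_spdk_ns runs with its output redirected away:

  # Tear everything down so the next test can rebuild it from scratch.
  kill "$nvmfpid"                    # the nvmf_tgt started earlier (82546 in this run)
  modprobe -v -r nvme-tcp            # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns; its output is hidden (14> /dev/null)
  ip -4 addr flush nvmf_init_if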
00:23:52.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.190 02:24:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.191 
02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:52.191 Cannot find device "nvmf_tgt_br" 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.191 Cannot find device "nvmf_tgt_br2" 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:52.191 Cannot find device "nvmf_tgt_br" 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:52.191 Cannot find device "nvmf_tgt_br2" 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
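The "Cannot find device" and "Cannot open network namespace" lines in this stretch are expected: nvmf_veth_init begins by deleting whatever a previous run may have left behind and tolerates every failure (the bare `true` steps in the trace). The pattern is roughly the sketch below; the `|| true` is an assumption about how common.sh swallows the errors, since the trace only shows `true` executing after each failed delete:

  # Best-effort removal of stale interfaces before the topology is rebuilt;
  # on a host the previous test already cleaned up, none of these exist.
  ip link set nvmf_init_br nomaster                           || true
  ip link set nvmf_tgt_br nomaster                            || true
  ip link set nvmf_tgt_br2 nomaster                           || true
  ip link delete nvmf_br type bridge                          || true
  ip link delete nvmf_init_if                                 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2  || true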
00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.191 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:52.448 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:52.448 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:52.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:52.449 00:23:52.449 --- 10.0.0.2 ping statistics --- 00:23:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.449 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:52.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:52.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:52.449 00:23:52.449 --- 10.0.0.3 ping statistics --- 00:23:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.449 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:52.449 00:23:52.449 --- 10.0.0.1 ping statistics --- 00:23:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.449 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=82839 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 82839 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 82839 ']' 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
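The three pings just above verify the topology nvmf_veth_init rebuilt before the failover target starts. Collapsed out of the trace, the build-up amounts to the sketch below; interface names, addresses and the port-4420 iptables rule are the defaults set at nvmf/common.sh@141-152 earlier, and the loop is only a condensation of the individual `ip link set ... up` steps:

  # One namespace for the target, three veth pairs (one initiator-side, two
  # target-side) and a bridge that joins the host ends together.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT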
00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.449 02:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.708 [2024-05-15 02:24:40.516876] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:52.708 [2024-05-15 02:24:40.516990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.708 [2024-05-15 02:24:40.655916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:52.969 [2024-05-15 02:24:40.727753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.969 [2024-05-15 02:24:40.727905] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.969 [2024-05-15 02:24:40.727917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.969 [2024-05-15 02:24:40.727926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.969 [2024-05-15 02:24:40.727934] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.969 [2024-05-15 02:24:40.728054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.969 [2024-05-15 02:24:40.728182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.969 [2024-05-15 02:24:40.728192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:53.905 [2024-05-15 02:24:41.815643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.905 02:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:54.162 Malloc0 00:23:54.163 02:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.420 02:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.678 02:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.936 [2024-05-15 02:24:42.845494] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:54.936 [2024-05-15 
02:24:42.845803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.936 02:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.194 [2024-05-15 02:24:43.081863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.194 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:55.452 [2024-05-15 02:24:43.318077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=82933 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 82933 /var/tmp/bdevperf.sock 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 82933 ']' 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.452 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.453 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
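The target side of the failover test is a handful of RPCs against the default /var/tmp/spdk.sock; condensed from the trace into a sketch (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the loop condenses the three separate add_listener calls above):

  # One TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem,
  # and a listener on each of the three ports the test will fail over between.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

bdevperf gets its own RPC socket (-r /var/tmp/bdevperf.sock) so that, once NVMe0 is attached to port 4420, listeners can be removed through the target's socket without disturbing the host-side process.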
00:23:55.453 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.453 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:55.711 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.711 02:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:55.711 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.968 NVMe0n1 00:23:56.227 02:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.483 00:23:56.483 02:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=82961 00:23:56.483 02:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.483 02:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:57.413 02:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.671 [2024-05-15 02:24:45.661804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.662925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.663006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.663079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.663150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.671 [2024-05-15 02:24:45.663223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b33d0 is same with the state(5) to be set 00:23:57.928 02:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:01.209 02:24:48 nvmf_tcp.nvmf_failover -- 
host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:01.209 00:24:01.209 02:24:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:01.468 [2024-05-15 02:24:49.295428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.295997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 02:24:49.296070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set 00:24:01.468 [2024-05-15 
02:24:49.296078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3f70 is same with the state(5) to be set
00:24:01.468 [ ... the same tcp.c:1598 *ERROR* message for tqpair=0x22b3f70 repeats verbatim, timestamps 2024-05-15 02:24:49.296086 through 02:24:49.297003 (~105 further identical occurrences elided) ... ]
00:24:01.469 02:24:49 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:04.754 02:24:52 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.754 [2024-05-15 02:24:52.585817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.754 02:24:52 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:05.686 02:24:53 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:05.944 [2024-05-15 02:24:53.892319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.944 [2024-05-15 02:24:53.892402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.944 [2024-05-15 02:24:53.892424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.945 [2024-05-15 02:24:53.892440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.945 [2024-05-15 02:24:53.892454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.945 [2024-05-15 02:24:53.892469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210b900 is same with the state(5) to be set 00:24:05.945 02:24:53 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 82961 00:24:12.513 0 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 82933 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 82933 ']' 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 82933 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82933 00:24:12.513 killing process with pid 82933 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82933' 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 82933 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 82933 00:24:12.513 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:12.513 [2024-05-15 02:24:43.385480] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:24:12.513 [2024-05-15 02:24:43.385599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82933 ] 00:24:12.513 [2024-05-15 02:24:43.520323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.513 [2024-05-15 02:24:43.621734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.513 Running I/O for 15 seconds... 00:24:12.513 [2024-05-15 02:24:45.664010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 
02:24:45.664419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.664997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.513 [2024-05-15 02:24:45.665953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.513 [2024-05-15 02:24:45.665973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.665990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.666337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666451] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.666969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.666989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.514 [2024-05-15 02:24:45.667205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:12.514 [2024-05-15 02:24:45.667243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.514 [2024-05-15 02:24:45.667619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.514 [2024-05-15 02:24:45.667638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.667977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.667995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668040] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:45.668631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:12.515 [2024-05-15 02:24:45.668838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.668971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.668991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.669016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.669055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.515 [2024-05-15 02:24:45.669092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2234830 is same with the state(5) to be set 00:24:12.515 [2024-05-15 02:24:45.669132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.515 [2024-05-15 02:24:45.669146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.515 [2024-05-15 02:24:45.669160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85696 len:8 PRP1 0x0 PRP2 0x0 00:24:12.515 [2024-05-15 02:24:45.669177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669230] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2234830 was disconnected and freed. reset controller. 
00:24:12.515 [2024-05-15 02:24:45.669253] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:12.515 [2024-05-15 02:24:45.669318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.515 [2024-05-15 02:24:45.669344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.515 [2024-05-15 02:24:45.669382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.515 [2024-05-15 02:24:45.669434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.515 [2024-05-15 02:24:45.669470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:45.669500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:12.515 [2024-05-15 02:24:45.673581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:12.515 [2024-05-15 02:24:45.673625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c55f0 (9): Bad file descriptor 00:24:12.515 [2024-05-15 02:24:45.705788] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
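The block above is one complete failover cycle: queued I/O on the TCP qpair is completed manually with ABORTED - SQ DELETION status, the qpair (0x2234830) is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully. As a minimal sketch (not part of the test output; the helper name summarize_failover and the return keys are hypothetical), console text in this format can be tallied like so:

    import re

    def summarize_failover(log_text: str) -> dict:
        # Count I/O completions that were manually aborted when the SQ was deleted.
        aborted = len(re.findall(r"ABORTED - SQ DELETION", log_text))
        # Collect (old_trid, new_trid) pairs from the bdev_nvme failover notices.
        failovers = re.findall(r"Start failover from (\S+) to (\S+)", log_text)
        # Count controller resets that completed successfully.
        resets = len(re.findall(r"Resetting controller successful", log_text))
        return {"aborted_completions": aborted,
                "failovers": failovers,
                "successful_resets": resets}

    # Applied to the 4420 -> 4421 cycle above, this reports one failover pair,
    # one successful reset, and the per-command aborted completions.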
00:24:12.515 [2024-05-15 02:24:49.297266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:49.297311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:49.297339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:49.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:49.297416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:49.297442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.515 [2024-05-15 02:24:49.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.515 [2024-05-15 02:24:49.297472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.297974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.297991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81208 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.516 [2024-05-15 02:24:49.298789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.516 [2024-05-15 02:24:49.298804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.298834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.298864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.298894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.298972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.298988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:12.517 [2024-05-15 02:24:49.299002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.299032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.299070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.299100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.299130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.517 [2024-05-15 02:24:49.299160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299318] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.299971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.299987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 
02:24:49.300301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.517 [2024-05-15 02:24:49.300448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.517 [2024-05-15 02:24:49.300464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:49.300697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.300972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.300988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:49.301444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df300 is same with the state(5) to be set 00:24:12.518 [2024-05-15 02:24:49.301476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.518 [2024-05-15 02:24:49.301509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.518 [2024-05-15 02:24:49.301526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81520 len:8 PRP1 0x0 PRP2 0x0 00:24:12.518 [2024-05-15 02:24:49.301540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301594] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23df300 was disconnected and freed. reset controller. 
00:24:12.518 [2024-05-15 02:24:49.301613] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:12.518 [2024-05-15 02:24:49.301672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.518 [2024-05-15 02:24:49.301694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.518 [2024-05-15 02:24:49.301722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.518 [2024-05-15 02:24:49.301750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.518 [2024-05-15 02:24:49.301778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:49.301791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:12.518 [2024-05-15 02:24:49.301827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c55f0 (9): Bad file descriptor 00:24:12.518 [2024-05-15 02:24:49.305908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:12.518 [2024-05-15 02:24:49.353920] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
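The second cycle above repeats the same pattern for the 10.0.0.2:4421 to 10.0.0.2:4422 transition on qpair 0x23df300; fed this text, the sketch given earlier would report one more failover pair and one more successful reset, with the ABORTED - SQ DELETION completions accounting for the queued I/O flushed while the submission queue is deleted.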
00:24:12.518 [2024-05-15 02:24:53.894076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.518 [2024-05-15 02:24:53.894435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:53.894464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:53.894493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:53.894523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:53.894552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.518 [2024-05-15 02:24:53.894567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.518 [2024-05-15 02:24:53.894580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:75 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.894973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.894988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.519 [2024-05-15 02:24:53.895681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895777] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.895975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.895989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.896005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.896034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.896048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.896063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.896077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.896093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.896107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.519 [2024-05-15 02:24:53.896122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.519 [2024-05-15 02:24:53.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 
02:24:53.896407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.896983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.896997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7848 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.897027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.897057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.897092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.897123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.520 [2024-05-15 02:24:53.897153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7888 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7896 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7912 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7920 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7928 len:8 PRP1 0x0 PRP2 0x0 00:24:12.520 [2024-05-15 02:24:53.897478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.520 [2024-05-15 02:24:53.897502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.520 [2024-05-15 02:24:53.897514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.520 [2024-05-15 02:24:53.897533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7944 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7952 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7960 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7976 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7984 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7992 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.897933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.897947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.897957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.897967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8008 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 
[2024-05-15 02:24:53.897981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8016 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8024 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8040 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8048 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8072 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7184 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7192 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7208 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7216 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7224 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:12.521 [2024-05-15 02:24:53.898754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:12.521 [2024-05-15 02:24:53.898765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7240 len:8 PRP1 0x0 PRP2 0x0 00:24:12.521 [2024-05-15 02:24:53.898783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898832] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2239d60 was disconnected and freed. reset controller. 
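The run of *NOTICE* lines above is the SPDK NVMe driver (nvme_qpair.c) inside the bdevperf process reporting every in-flight READ and WRITE that completed with ABORTED - SQ DELETION (status 00/08) when the active queue pair was torn down for the controller reset; during a failover test this is expected noise, not a failure. A minimal sketch for summarizing such a flood offline, assuming the bdevperf output was captured to a file (the test later dumps a similar capture from test/nvmf/host/try.txt -- substitute whatever path was actually used):

  # Hypothetical capture path -- point this at wherever the bdevperf log was saved.
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # Total commands completed as ABORTED - SQ DELETION.
  grep -o 'ABORTED - SQ DELETION' "$log" | wc -l
  # Breakdown of the aborted commands by opcode (READ vs WRITE).
  grep -oE 'NOTICE\*: (READ|WRITE) sqid' "$log" | awk '{print $2}' | sort | uniq -c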
00:24:12.521 [2024-05-15 02:24:53.898852] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:12.521 [2024-05-15 02:24:53.898907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.521 [2024-05-15 02:24:53.898928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.521 [2024-05-15 02:24:53.898957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.521 [2024-05-15 02:24:53.898985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.898999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.521 [2024-05-15 02:24:53.899012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.521 [2024-05-15 02:24:53.899026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:12.521 [2024-05-15 02:24:53.899079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c55f0 (9): Bad file descriptor 00:24:12.521 [2024-05-15 02:24:53.903049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:12.521 [2024-05-15 02:24:53.935724] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:12.521 00:24:12.521 Latency(us) 00:24:12.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.521 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:12.521 Verification LBA range: start 0x0 length 0x4000 00:24:12.521 NVMe0n1 : 15.01 8575.32 33.50 210.82 0.00 14534.94 618.12 23592.96 00:24:12.521 =================================================================================================================== 00:24:12.522 Total : 8575.32 33.50 210.82 0.00 14534.94 618.12 23592.96 00:24:12.522 Received shutdown signal, test time was about 15.000000 seconds 00:24:12.522 00:24:12.522 Latency(us) 00:24:12.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.522 =================================================================================================================== 00:24:12.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=83075 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 83075 /var/tmp/bdevperf.sock 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 83075 ']' 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:12.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
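As a cross-check on the 15 s summary a few entries above: with the 4096-byte I/O size shown in the Job line, the reported 8575.32 IOPS works out to the 33.50 MiB/s column, and the non-zero Fail/s value is consistent with the aborted I/O logged around the three successful controller resets counted just above. A quick sanity check, as a sketch:

  # Cross-check the NVMe0n1 row: IOPS x 4 KiB I/O size should match the MiB/s column.
  awk 'BEGIN { iops = 8575.32; printf "%.2f MiB/s\n", iops * 4096 / (1024 * 1024) }'
  # Prints 33.50 MiB/s, matching the 15.01 s verify run at queue depth 128.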
00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:12.522 02:24:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.522 02:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:12.522 02:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:12.522 02:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:12.522 [2024-05-15 02:25:00.284697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:12.522 02:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:12.782 [2024-05-15 02:25:00.581007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:12.783 02:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.043 NVMe0n1 00:24:13.043 02:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.301 00:24:13.301 02:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.869 00:24:13.869 02:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.869 02:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:14.140 02:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:14.401 02:25:02 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:17.684 02:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:17.684 02:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:17.684 02:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=83168 00:24:17.684 02:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.684 02:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 83168 00:24:19.059 0 00:24:19.059 02:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:19.059 [2024-05-15 02:24:59.752922] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:24:19.059 [2024-05-15 02:24:59.753026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83075 ] 00:24:19.059 [2024-05-15 02:24:59.888311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.059 [2024-05-15 02:24:59.947918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.059 [2024-05-15 02:25:02.219675] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:19.059 [2024-05-15 02:25:02.219827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.059 [2024-05-15 02:25:02.219867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.059 [2024-05-15 02:25:02.219897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.059 [2024-05-15 02:25:02.219923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.059 [2024-05-15 02:25:02.219947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.059 [2024-05-15 02:25:02.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.059 [2024-05-15 02:25:02.219994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.059 [2024-05-15 02:25:02.220018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.059 [2024-05-15 02:25:02.220043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.059 [2024-05-15 02:25:02.220114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.059 [2024-05-15 02:25:02.220159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e85f0 (9): Bad file descriptor 00:24:19.059 [2024-05-15 02:25:02.229276] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:19.059 Running I/O for 1 seconds... 
00:24:19.059 00:24:19.059 Latency(us) 00:24:19.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:19.059 Verification LBA range: start 0x0 length 0x4000 00:24:19.059 NVMe0n1 : 1.01 7997.19 31.24 0.00 0.00 15910.77 2323.55 23354.65 00:24:19.059 =================================================================================================================== 00:24:19.059 Total : 7997.19 31.24 0.00 0.00 15910.77 2323.55 23354.65 00:24:19.059 02:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:19.059 02:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.318 02:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:19.576 02:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.576 02:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:19.834 02:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.092 02:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:23.377 02:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.377 02:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 83075 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 83075 ']' 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 83075 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83075 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:23.377 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:23.378 killing process with pid 83075 00:24:23.378 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83075' 00:24:23.378 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 83075 00:24:23.378 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 83075 00:24:23.378 02:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:23.635 02:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.893 02:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:23.893 02:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:23.893 02:25:11 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:23.893 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.893 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:23.893 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.894 rmmod nvme_tcp 00:24:23.894 rmmod nvme_fabrics 00:24:23.894 rmmod nvme_keyring 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 82839 ']' 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 82839 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 82839 ']' 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 82839 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82839 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:23.894 killing process with pid 82839 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82839' 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 82839 00:24:23.894 [2024-05-15 02:25:11.756797] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:23.894 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 82839 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:24.152 00:24:24.152 real 0m32.056s 00:24:24.152 user 2m4.939s 00:24:24.152 sys 0m4.554s 00:24:24.152 02:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:24.152 02:25:11 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:24.152 ************************************ 00:24:24.152 END TEST nvmf_failover 00:24:24.152 ************************************ 00:24:24.152 02:25:12 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:24.152 02:25:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:24.152 02:25:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:24.152 02:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.152 ************************************ 00:24:24.152 START TEST nvmf_host_discovery 00:24:24.152 ************************************ 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:24.152 * Looking for test storage... 00:24:24.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.152 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:24.153 02:25:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:24.153 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:24.412 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:24.412 Cannot find device 
"nvmf_tgt_br" 00:24:24.412 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:24.412 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:24.412 Cannot find device "nvmf_tgt_br2" 00:24:24.412 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:24.413 Cannot find device "nvmf_tgt_br" 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:24.413 Cannot find device "nvmf_tgt_br2" 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:24.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:24.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:24.413 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:24.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:24.672 00:24:24.672 --- 10.0.0.2 ping statistics --- 00:24:24.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.672 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:24.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:24.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:24:24.672 00:24:24.672 --- 10.0.0.3 ping statistics --- 00:24:24.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.672 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:24.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:24.672 00:24:24.672 --- 10.0.0.1 ping statistics --- 00:24:24.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.672 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=83429 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 83429 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 83429 ']' 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:24.672 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.672 [2024-05-15 02:25:12.594456] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:24.672 [2024-05-15 02:25:12.594550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.931 [2024-05-15 02:25:12.728116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.931 [2024-05-15 02:25:12.798710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.931 [2024-05-15 02:25:12.798765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:24.931 [2024-05-15 02:25:12.798778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.931 [2024-05-15 02:25:12.798788] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.931 [2024-05-15 02:25:12.798797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.931 [2024-05-15 02:25:12.798825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.931 [2024-05-15 02:25:12.931645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.931 [2024-05-15 02:25:12.939552] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:24.931 [2024-05-15 02:25:12.939821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.931 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.190 null0 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.190 null1 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:25.190 
02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=83461 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 83461 /tmp/host.sock 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 83461 ']' 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:25.190 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:25.190 02:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.190 [2024-05-15 02:25:13.045716] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:25.190 [2024-05-15 02:25:13.045833] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83461 ] 00:24:25.190 [2024-05-15 02:25:13.192426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.449 [2024-05-15 02:25:13.265585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.384 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.385 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.644 [2024-05-15 02:25:14.484367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.644 02:25:14 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:26.644 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:24:26.903 02:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:27.161 [2024-05-15 02:25:15.107281] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:27.161 [2024-05-15 02:25:15.107336] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:27.161 [2024-05-15 02:25:15.107358] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.423 [2024-05-15 02:25:15.195479] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:27.423 [2024-05-15 02:25:15.258479] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:27.423 [2024-05-15 02:25:15.258527] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.008 02:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.008 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:28.267 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 [2024-05-15 02:25:16.099277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.268 [2024-05-15 02:25:16.099715] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:28.268 [2024-05-15 02:25:16.099759] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.268 02:25:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.268 [2024-05-15 02:25:16.185775] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.268 [2024-05-15 02:25:16.249527] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.268 [2024-05-15 02:25:16.249578] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.268 [2024-05-15 02:25:16.249586] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.268 
02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.268 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.528 [2024-05-15 02:25:16.395995] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:28.528 [2024-05-15 02:25:16.396037] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.528 [2024-05-15 02:25:16.396112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.528 [2024-05-15 02:25:16.396146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.528 [2024-05-15 02:25:16.396159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.528 [2024-05-15 02:25:16.396169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.528 [2024-05-15 02:25:16.396179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.528 [2024-05-15 02:25:16.396189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.528 [2024-05-15 02:25:16.396199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.528 [2024-05-15 02:25:16.396208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.528 [2024-05-15 02:25:16.396217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_names 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.528 [2024-05-15 02:25:16.406061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.528 [2024-05-15 02:25:16.416089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.528 [2024-05-15 02:25:16.416241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.416295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.416313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.528 [2024-05-15 02:25:16.416325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.528 [2024-05-15 02:25:16.416343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.528 [2024-05-15 02:25:16.416360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.528 [2024-05-15 02:25:16.416370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.528 [2024-05-15 02:25:16.416381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.528 [2024-05-15 02:25:16.416416] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
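The trace above repeatedly exercises a waitforcondition-style polling helper: a quoted condition string is eval'd until it passes or a retry budget of 10 attempts runs out. A minimal sketch of that pattern in bash (the sleep pacing is an assumption; the real helper lives in common/autotest_common.sh and may differ in detail):

    # Poll an arbitrary bash condition until it holds or the retry budget runs out.
    # Usage: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    waitforcondition() {
        local cond=$1    # condition passed as one quoted string, as in the trace
        local max=10     # retry budget, mirroring 'local max=10' seen above
        while (( max-- )); do
            eval "$cond" && return 0    # re-evaluate the condition on each pass
            sleep 1                     # assumed pacing between attempts
        done
        echo "condition never became true: $cond" >&2
        return 1
    }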
00:24:28.528 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.528 [2024-05-15 02:25:16.426168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.528 [2024-05-15 02:25:16.426265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.426312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.426328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.528 [2024-05-15 02:25:16.426339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.528 [2024-05-15 02:25:16.426355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.528 [2024-05-15 02:25:16.426371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.528 [2024-05-15 02:25:16.426381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.528 [2024-05-15 02:25:16.426405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.528 [2024-05-15 02:25:16.426421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.528 [2024-05-15 02:25:16.436229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.528 [2024-05-15 02:25:16.436325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.436373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.528 [2024-05-15 02:25:16.436403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.528 [2024-05-15 02:25:16.436416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.528 [2024-05-15 02:25:16.436433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.528 [2024-05-15 02:25:16.436455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.528 [2024-05-15 02:25:16.436465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.528 [2024-05-15 02:25:16.436474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.528 [2024-05-15 02:25:16.436490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
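The jq/sort/xargs pipelines in the trace come from small helpers in host/discovery.sh that flatten RPC output into one comparable string. A rough equivalent is sketched below, assuming SPDK's rpc.py is on PATH and the host application listens on /tmp/host.sock as in this run:

    HOST_SOCK=/tmp/host.sock

    # Names of NVMe controllers attached on the host side, as one sorted line.
    get_subsystem_names() {
        rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Names of all host-side bdevs (e.g. "nvme0n1 nvme0n2"), as one sorted line.
    get_bdev_list() {
        rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Example: block until both namespaces are visible as bdevs.
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'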
00:24:28.528 [2024-05-15 02:25:16.446290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.528 [2024-05-15 02:25:16.446399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.446451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.446468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.529 [2024-05-15 02:25:16.446478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.529 [2024-05-15 02:25:16.446494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.529 [2024-05-15 02:25:16.446510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.529 [2024-05-15 02:25:16.446519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.529 [2024-05-15 02:25:16.446529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.529 [2024-05-15 02:25:16.446545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.529 [2024-05-15 02:25:16.456353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.529 [2024-05-15 02:25:16.456457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.456506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.456522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.529 [2024-05-15 02:25:16.456532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.529 [2024-05-15 02:25:16.456548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.529 [2024-05-15 02:25:16.456563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.529 [2024-05-15 02:25:16.456573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.529 [2024-05-15 02:25:16.456583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.529 [2024-05-15 02:25:16.456598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
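The is_notification_count_eq checks in the trace count how many bdev notifications arrived since the last seen notify id; notify_id advances 1, then 2, then 4 as events accumulate. A sketch of that bookkeeping, using the same notify_get_notifications RPC and jq length query shown above (helper shape is an assumption based on the trace):

    notify_id=0

    # Fetch notifications newer than the last seen id, count them, advance the cursor.
    get_notification_count() {
        local events
        events=$(rpc.py -s "$HOST_SOCK" notify_get_notifications -i "$notify_id")
        notification_count=$(jq '. | length' <<< "$events")
        notify_id=$(( notify_id + notification_count ))
    }

    # Example: expect exactly one new notification after adding a namespace.
    # expected_count=1
    # waitforcondition 'get_notification_count && ((notification_count == expected_count))'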
00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.529 [2024-05-15 02:25:16.466421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.529 [2024-05-15 02:25:16.466519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.466566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.466582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.529 [2024-05-15 02:25:16.466593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.529 [2024-05-15 02:25:16.466610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.529 [2024-05-15 02:25:16.466624] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.529 [2024-05-15 02:25:16.466634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.529 [2024-05-15 02:25:16.466644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.529 [2024-05-15 02:25:16.466659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
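The repeated "connect() failed, errno = 111" lines are the host-side controller retrying TCP port 4420 after the test removed that listener on the target; the test then waits for the discovery service to drop the 4420 path and keep only 4421. A sketch of that step, under the same assumptions as the helpers above (NVMF_PORT=4420 and NVMF_SECOND_PORT=4421 in this run):

    # Ports of all active paths for one host-side controller, numerically sorted.
    get_subsystem_paths() {
        local ctrlr=$1
        rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$ctrlr" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Remove the first listener on the target (default RPC socket), then wait
    # until only the second port remains as a path to nvme0.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'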
00:24:28.529 [2024-05-15 02:25:16.476473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.529 [2024-05-15 02:25:16.476557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.476602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.529 [2024-05-15 02:25:16.476618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00390 with addr=10.0.0.2, port=4420 00:24:28.529 [2024-05-15 02:25:16.476628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00390 is same with the state(5) to be set 00:24:28.529 [2024-05-15 02:25:16.476644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00390 (9): Bad file descriptor 00:24:28.529 [2024-05-15 02:25:16.476659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.529 [2024-05-15 02:25:16.476677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.529 [2024-05-15 02:25:16.476687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.529 [2024-05-15 02:25:16.476701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.529 [2024-05-15 02:25:16.482064] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:28.529 [2024-05-15 02:25:16.482095] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.529 02:25:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:28.788 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.789 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.047 02:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.984 [2024-05-15 02:25:17.857651] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:29.984 [2024-05-15 02:25:17.857695] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:29.984 [2024-05-15 02:25:17.857716] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.984 [2024-05-15 02:25:17.943781] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:30.243 [2024-05-15 02:25:18.003398] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:30.243 [2024-05-15 02:25:18.003481] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 2024/05/15 02:25:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:30.243 request: 00:24:30.243 { 00:24:30.243 "method": "bdev_nvme_start_discovery", 00:24:30.243 "params": { 00:24:30.243 "name": "nvme", 00:24:30.243 "trtype": "tcp", 00:24:30.243 "traddr": "10.0.0.2", 00:24:30.243 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:30.243 "adrfam": "ipv4", 00:24:30.243 "trsvcid": "8009", 00:24:30.243 "wait_for_attach": true 00:24:30.243 } 00:24:30.243 } 00:24:30.243 Got JSON-RPC error response 00:24:30.243 GoRPCClient: error on JSON-RPC call 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@648 -- # local es=0 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 2024/05/15 02:25:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:30.243 request: 00:24:30.243 { 00:24:30.243 "method": "bdev_nvme_start_discovery", 00:24:30.243 "params": { 00:24:30.243 "name": "nvme_second", 00:24:30.243 "trtype": "tcp", 00:24:30.243 "traddr": "10.0.0.2", 00:24:30.243 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:30.243 "adrfam": "ipv4", 00:24:30.243 "trsvcid": "8009", 00:24:30.243 "wait_for_attach": true 00:24:30.243 } 00:24:30.243 } 00:24:30.243 Got JSON-RPC error response 00:24:30.243 GoRPCClient: error on JSON-RPC call 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:30.243 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.503 02:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.437 [2024-05-15 02:25:19.277206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.437 [2024-05-15 02:25:19.277331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.437 [2024-05-15 02:25:19.277352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17ef0 with addr=10.0.0.2, port=8010 00:24:31.437 [2024-05-15 02:25:19.277371] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:31.437 [2024-05-15 02:25:19.277382] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:31.437 [2024-05-15 02:25:19.277410] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:32.367 [2024-05-15 02:25:20.277170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.367 [2024-05-15 02:25:20.277281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.367 [2024-05-15 02:25:20.277301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17ef0 with addr=10.0.0.2, port=8010 00:24:32.367 [2024-05-15 02:25:20.277321] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:32.367 [2024-05-15 02:25:20.277332] nvme.c: 821:nvme_probe_internal: *ERROR*: 
NVMe ctrlr scan failed 00:24:32.367 [2024-05-15 02:25:20.277341] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:33.302 [2024-05-15 02:25:21.277008] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:33.302 2024/05/15 02:25:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:24:33.302 request: 00:24:33.302 { 00:24:33.302 "method": "bdev_nvme_start_discovery", 00:24:33.302 "params": { 00:24:33.302 "name": "nvme_second", 00:24:33.302 "trtype": "tcp", 00:24:33.302 "traddr": "10.0.0.2", 00:24:33.302 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:33.302 "adrfam": "ipv4", 00:24:33.302 "trsvcid": "8010", 00:24:33.302 "attach_timeout_ms": 3000 00:24:33.302 } 00:24:33.302 } 00:24:33.302 Got JSON-RPC error response 00:24:33.302 GoRPCClient: error on JSON-RPC call 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:33.302 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 83461 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.560 rmmod nvme_tcp 00:24:33.560 rmmod nvme_fabrics 00:24:33.560 rmmod nvme_keyring 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 83429 ']' 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 83429 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 83429 ']' 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 83429 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83429 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:33.560 killing process with pid 83429 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83429' 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 83429 00:24:33.560 [2024-05-15 02:25:21.456433] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:33.560 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 83429 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.817 02:25:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:33.817 00:24:33.817 real 0m9.635s 00:24:33.817 user 0m19.632s 00:24:33.818 sys 0m1.539s 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.818 ************************************ 00:24:33.818 END TEST nvmf_host_discovery 00:24:33.818 ************************************ 00:24:33.818 02:25:21 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:33.818 02:25:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:33.818 02:25:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:33.818 02:25:21 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.818 ************************************ 00:24:33.818 START TEST nvmf_host_multipath_status 00:24:33.818 ************************************ 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:33.818 * Looking for test storage... 00:24:33.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.818 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.076 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:34.077 02:25:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 
-- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:34.077 Cannot find device "nvmf_tgt_br" 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:34.077 Cannot find device "nvmf_tgt_br2" 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:34.077 Cannot find device "nvmf_tgt_br" 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:34.077 Cannot find device "nvmf_tgt_br2" 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:34.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:34.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:34.077 02:25:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:34.077 02:25:22 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:34.077 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:34.335 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:34.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:24:34.336 00:24:34.336 --- 10.0.0.2 ping statistics --- 00:24:34.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.336 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:34.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:34.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:24:34.336 00:24:34.336 --- 10.0.0.3 ping statistics --- 00:24:34.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.336 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:34.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:34.336 00:24:34.336 --- 10.0.0.1 ping statistics --- 00:24:34.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.336 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=83868 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 83868 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 83868 ']' 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:34.336 02:25:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.336 [2024-05-15 02:25:22.274146] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:34.336 [2024-05-15 02:25:22.274246] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.594 [2024-05-15 02:25:22.411277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.594 [2024-05-15 02:25:22.479990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
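The nvmf_veth_init sequence traced above builds the whole test network from scratch: the initiator keeps nvmf_init_if (10.0.0.1) in the default namespace, the two target ports nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side veth peers are enslaved to nvmf_br, an iptables rule opens TCP 4420, and the ping checks confirm reachability before the target comes up. A condensed standalone sketch of the same topology follows (names and addresses are the ones in the trace; it assumes root and omits the cleanup and error branches common.sh runs first):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # reachability sanity checks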
00:24:34.594 [2024-05-15 02:25:22.480048] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.594 [2024-05-15 02:25:22.480063] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.594 [2024-05-15 02:25:22.480073] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.594 [2024-05-15 02:25:22.480082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.594 [2024-05-15 02:25:22.480198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.594 [2024-05-15 02:25:22.480204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=83868 00:24:35.538 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.538 [2024-05-15 02:25:23.553162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.795 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:36.053 Malloc0 00:24:36.053 02:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:36.311 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.570 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.570 [2024-05-15 02:25:24.581507] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:36.570 [2024-05-15 02:25:24.581773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.829 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.829 [2024-05-15 02:25:24.829878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=83955 00:24:37.088 02:25:24 
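With nvmf_tgt running inside the namespace, the subsystem is provisioned purely over JSON-RPC, as the rpc.py calls above show: one TCP transport, a 64 MB Malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), a subsystem with ANA reporting enabled, and two listeners on the same address so the host sees two paths to one namespace. Roughly, using the exact flags from the trace (rpc_tgt is an illustrative variable name):

rpc_tgt=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # default socket /var/tmp/spdk.sock, i.e. the target
$rpc_tgt nvmf_create_transport -t tcp -o -u 8192          # same transport flags as the trace
$rpc_tgt bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512-byte blocks
$rpc_tgt nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
$rpc_tgt nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_tgt nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc_tgt nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2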
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 83955 /var/tmp/bdevperf.sock 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 83955 ']' 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:37.088 02:25:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:38.034 02:25:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:38.034 02:25:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:24:38.034 02:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:38.305 02:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:38.563 Nvme0n1 00:24:38.563 02:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:39.131 Nvme0n1 00:24:39.131 02:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:39.131 02:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:41.033 02:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:41.034 02:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:41.292 02:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:41.551 02:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:42.926 02:25:30 
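On the initiator side the test drives I/O with the bdevperf example app rather than the kernel host stack. bdevperf is started idle (-z) on its own RPC socket, /var/tmp/bdevperf.sock, so it can be configured independently of the target; the controller is attached once per listener, with -x multipath on the second call so the 4421 connection becomes another path of Nvme0 instead of a new controller, and the verify workload is then kicked off through bdevperf.py perform_tests. A sketch of that sequence with the flags from the trace (the rpc_bperf helper name is illustrative; -r/-l/-o are copied verbatim from the trace):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc_bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
rpc_bperf bdev_nvme_set_options -r -1
rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10                  # first path, creates Nvme0n1
rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10     # second path, same namespace
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &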
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.926 02:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.185 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.185 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.185 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.185 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.443 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.443 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.443 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.443 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.702 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.702 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.702 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.702 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.269 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.269 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.269 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.269 02:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.527 02:25:32 nvmf_tcp.nvmf_host_multipath_status 
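Every check_status round above reduces to the same probe: ask bdevperf for bdev_nvme_get_io_paths, pull the current/connected/accessible flags for trsvcid 4420 and 4421 out with jq, and compare them against the expected pattern with [[ ... ]]. A standalone version of that probe (path_field is an illustrative name; the jq filter is the one used in the trace):

path_field() {   # usage: path_field <trsvcid> <current|connected|accessible>
    local port=$1 field=$2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
}

# expectations for check_status true false true true true true (both listeners optimized):
[[ $(path_field 4420 current) == true ]]        # 4420 carries the I/O
[[ $(path_field 4421 current) == false ]]       # 4421 is a standby path
[[ $(path_field 4420 connected) == true && $(path_field 4421 connected) == true ]]
[[ $(path_field 4420 accessible) == true && $(path_field 4421 accessible) == true ]]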
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.527 02:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:44.527 02:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.786 02:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:45.044 02:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:46.001 02:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:46.001 02:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.001 02:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.001 02:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.260 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.260 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.260 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.260 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.519 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.519 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.519 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.519 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.086 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.086 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.086 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.086 02:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.344 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.344 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.344 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.344 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.603 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.603 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.603 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.603 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.861 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.861 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:47.862 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:48.120 02:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:48.379 02:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:49.316 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:49.316 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.316 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.316 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.888 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.888 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:49.888 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.888 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.146 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.146 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.146 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.146 02:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
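The rest of the run is a loop over ANA state combinations: the target's two listeners are flipped between optimized, non_optimized and inaccessible, the host is given a second for the change to propagate, and the path view is re-checked. Each set_ANA_state call in the trace is just two nvmf_subsystem_listener_set_ana_state RPCs, one per listener, roughly (helper name illustrative):

rpc_tgt=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
set_ana_state() {   # usage: set_ana_state <state-for-4420> <state-for-4421>
    $rpc_tgt nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_tgt nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ana_state non_optimized non_optimized   # as in host/multipath_status.sh@100 above
sleep 1                                     # let the host pick up the ANA change before re-checking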
select (.transport.trsvcid=="4420").connected' 00:24:50.472 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.472 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.472 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.472 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.731 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.731 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.731 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.731 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.990 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.990 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.990 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.990 02:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.248 02:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.248 02:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:51.248 02:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:51.507 02:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:51.765 02:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:53.139 02:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:53.139 02:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.139 02:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.139 02:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.139 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.139 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:24:53.139 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.139 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.398 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.398 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.398 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:53.398 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.989 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.989 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:53.990 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.990 02:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.263 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.263 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.263 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.263 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.521 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.521 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:54.521 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.521 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.780 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.780 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:54.780 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:55.038 02:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n inaccessible 00:24:55.296 02:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:56.233 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:56.233 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.492 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.492 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.751 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.751 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:56.751 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.751 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.011 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.011 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.011 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.011 02:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.270 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.270 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.270 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.270 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.529 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.529 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:57.529 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.529 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.787 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.787 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:57.787 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.787 02:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.088 02:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.088 02:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:58.088 02:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:58.346 02:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.604 02:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:59.540 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:59.540 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.540 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.540 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.799 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.799 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.799 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.799 02:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.057 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.057 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.057 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.057 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.315 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.316 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.316 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.316 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.882 02:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:01.448 02:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.448 02:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:01.706 02:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:01.706 02:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:01.964 02:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:02.222 02:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:03.237 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:03.237 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.237 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.237 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:03.495 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.495 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:03.495 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.495 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
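host/multipath_status.sh@116 above switches the Nvme0n1 multipath policy to active_active before replaying the same ANA combinations; from here on, all paths in the best reachable ANA state can be current at once, rather than the single active path seen in the first half of the run. The switch, plus a one-pass summary of both paths, would look roughly like this (the combined jq line is illustrative; the field names are the ones the test itself queries):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'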
00:25:03.755 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.755 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:03.755 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:03.755 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.014 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.014 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.014 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.014 02:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.273 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.273 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.273 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.273 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.566 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.566 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:04.566 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.566 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:04.846 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.846 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:04.846 02:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:05.104 02:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.362 02:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:06.735 
02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.735 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.994 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.994 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.994 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.994 02:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.252 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.252 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.252 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.252 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.818 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.818 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:07.818 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.818 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.076 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.076 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.076 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.076 02:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.076 02:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.076 02:25:56 
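Note the contrast with the first half of the run: the next transition (host/multipath_status.sh@129, both listeners non_optimized) now expects current=true on both ports, whereas the same ANA combination under the default policy earlier left only 4420 current. Reusing the illustrative path_field helper sketched above, that assertion amounts to:

[[ $(path_field 4420 current) == true && $(path_field 4421 current) == true ]]   # active_active: both paths stay current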
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:08.076 02:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.642 02:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:08.901 02:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:09.833 02:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:09.833 02:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:09.833 02:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.833 02:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.399 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.399 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.399 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.399 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.658 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.658 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.658 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.658 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.224 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.224 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.224 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.224 02:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.224 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.224 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.224 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.224 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.789 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.789 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.789 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.789 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.047 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.047 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:12.047 02:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.305 02:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.563 02:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.936 02:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.194 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.194 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.194 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.194 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.761 02:26:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.761 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.761 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.761 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.020 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.020 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.020 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.020 02:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.278 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.278 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:15.278 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:15.278 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.536 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.536 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 83955 00:25:15.536 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 83955 ']' 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 83955 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83955 00:25:15.537 killing process with pid 83955 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83955' 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 83955 00:25:15.537 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 83955 00:25:15.537 Connection closed with partial response: 00:25:15.537 00:25:15.537 00:25:15.816 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 83955 00:25:15.816 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:15.816 [2024-05-15 02:25:24.892677] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:15.816 [2024-05-15 02:25:24.892792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83955 ] 00:25:15.816 [2024-05-15 02:25:25.029202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.816 [2024-05-15 02:25:25.098773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.816 Running I/O for 90 seconds... 00:25:15.816 [2024-05-15 02:25:42.939918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.816 [2024-05-15 02:25:42.940001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.940970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.940985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.941007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.816 [2024-05-15 02:25:42.941022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.816 [2024-05-15 02:25:42.941044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.941964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:15.817 [2024-05-15 02:25:42.941980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.817 [2024-05-15 02:25:42.942687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.942980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.942995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.943027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.817 [2024-05-15 02:25:42.943043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.817 [2024-05-15 02:25:42.945094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.945142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 
dnr:0 00:25:15.818 [2024-05-15 02:25:42.945179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.945217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.945253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.945290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.945328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.945342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.946364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.818 [2024-05-15 02:25:42.946425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.946965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.946987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 
[2024-05-15 02:25:42.947307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.818 [2024-05-15 02:25:42.947620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.818 [2024-05-15 02:25:42.947643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.947658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70640 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.819 [2024-05-15 02:25:42.948853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.819 [2024-05-15 02:25:42.948868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:25:15.819 [2024-05-15 02:25:42.948899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.819 [2024-05-15 02:25:42.948914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:15.821 [2024-05-15 02:25:42.952032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.821 [2024-05-15 02:25:42.952047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[... 00:25:15.819-00:25:15.824 / 2024-05-15 02:25:42.948899-02:25:42.959362: the same pair of NOTICE lines repeats for every queued I/O on qid:1. WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all nsid:1 len:8, covering lba 70240-71256, are printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and each is completed by nvme_qpair.c: 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0, sqhd cycling 0x0000-0x007f, p:0 m:0 dnr:0 ...]
len:0x1000 00:25:15.824 [2024-05-15 02:25:42.959377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.824 [2024-05-15 02:25:42.959412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.824 [2024-05-15 02:25:42.959428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.824 [2024-05-15 02:25:42.959451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.824 [2024-05-15 02:25:42.959466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.959964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.959986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.960001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.960022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.960038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.960060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.960075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.968872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.968946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.968989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.969319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.969334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:25:15.825 [2024-05-15 02:25:42.970417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.825 [2024-05-15 02:25:42.970688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.825 [2024-05-15 02:25:42.970711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.970977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.970992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.826 [2024-05-15 02:25:42.971597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.971968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.971983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.826 [2024-05-15 02:25:42.972255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.826 [2024-05-15 02:25:42.972277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:25:15.827 [2024-05-15 02:25:42.972752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.972827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.972843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.973963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.973985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.827 [2024-05-15 02:25:42.974305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.827 [2024-05-15 02:25:42.974672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.827 [2024-05-15 02:25:42.974694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.828 [2024-05-15 02:25:42.974747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.974967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.974982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.828 [2024-05-15 02:25:42.975019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 
nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.828 
[2024-05-15 02:25:42.975913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.975965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.975987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.976025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.976066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.976103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.976141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.976955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.976984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.977013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.977041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.828 [2024-05-15 02:25:42.977065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.828 [2024-05-15 02:25:42.977081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977485] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 
02:25:42.977886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.977983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.977998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71048 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.829 [2024-05-15 02:25:42.978409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.829 [2024-05-15 02:25:42.978434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.978958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.978980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.979498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.979513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.830 [2024-05-15 02:25:42.980826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.830 [2024-05-15 02:25:42.980853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.980869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.980891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.980906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.980929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.980944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.980967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.980982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.831 [2024-05-15 02:25:42.981457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.831 [2024-05-15 02:25:42.981738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.981977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.981999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.982036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.982051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.982073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.982088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.982110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.982126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.982148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.831 [2024-05-15 02:25:42.991461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.831 [2024-05-15 02:25:42.991480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:25:15.832 [2024-05-15 02:25:42.991734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.991975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.991993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.832 [2024-05-15 02:25:42.993758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.832 [2024-05-15 02:25:42.993776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[...truncated: repeated nvme_qpair.c WRITE/READ command notices (qid:1, lba 70240-71256, len:8) and their ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions, logged between 02:25:42.993 and 02:25:43.004...]
00:25:15.837 [2024-05-15 02:25:43.004929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 
[2024-05-15 02:25:43.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.004972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.004987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.837 [2024-05-15 02:25:43.005501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.837 [2024-05-15 02:25:43.005517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.838 [2024-05-15 02:25:43.005570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.838 [2024-05-15 02:25:43.005616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.838 [2024-05-15 02:25:43.005659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.838 [2024-05-15 02:25:43.005702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.005969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.005996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.838 
[2024-05-15 02:25:43.006248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.006814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.006829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:25:43.007023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:25:43.007047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:26:00.518428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:26:00.518525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:26:00.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:26:00.518613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:26:00.518653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.838 [2024-05-15 02:26:00.518680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.838 [2024-05-15 02:26:00.518715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.518741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.518775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.518801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.518836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.518862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.518896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.518921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.518955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.518981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.519731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.519943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.519980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 
[2024-05-15 02:26:00.520152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.520610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.520670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.520704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.520734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7408 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.839 [2024-05-15 02:26:00.523844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.523911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.523951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.523979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.524017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.839 [2024-05-15 02:26:00.524044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.839 [2024-05-15 02:26:00.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.524111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.524160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.524187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.524222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.524249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.524281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.524306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.524340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.524406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.524450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.524468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.525903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.525971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:25:15.840 [2024-05-15 02:26:00.526649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.526890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.526949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.526965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.527048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.527113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.527302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.527365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-05-15 02:26:00.527467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.840 [2024-05-15 02:26:00.527699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.840 [2024-05-15 02:26:00.527725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.527787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.527839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.527868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.527904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.527932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.527968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.527995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.528536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.528936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.528965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.529037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.529102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.529166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.529228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.529293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.529335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.529365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.531861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.531917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.531969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-05-15 02:26:00.532660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.841 [2024-05-15 02:26:00.532900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.841 [2024-05-15 02:26:00.532929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.532967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.532996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:25:15.842 [2024-05-15 02:26:00.533593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.533730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.533864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.535503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.535573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.535634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.535750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.535815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.535882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.535985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536522] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.842 [2024-05-15 02:26:00.536787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.842 [2024-05-15 02:26:00.536823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-05-15 02:26:00.536848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.536884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.536936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.536975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.537917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.537954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.537990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.538024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.538048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.540188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.540268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.540786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.540852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.540919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.540955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.540984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.541050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.541247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.541463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.541519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-05-15 02:26:00.541667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.843 [2024-05-15 02:26:00.541698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.843 [2024-05-15 02:26:00.541722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.541757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.541785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.541821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.541848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.541884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.541913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.541950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.541977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:25:15.844 [2024-05-15 02:26:00.542015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.542049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.542087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.542115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.543718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.543770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.543823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.543853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.543892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.543921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.543958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.544912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.544950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.544979] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.545044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.545110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.545175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.545241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.545307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.545371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.545469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.545507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.545535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 02:26:00.547600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-05-15 
02:26:00.547683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.547774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.547841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.547906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.547943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.547970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.548007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.844 [2024-05-15 02:26:00.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.844 [2024-05-15 02:26:00.548073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.548230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.548449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.548506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.548579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.548639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.548951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.548988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.549015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.549053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.549083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.549930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.549978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.550061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.550130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.550199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.550286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.550352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.550441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.550510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.550587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.550625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.550656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.551217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.551294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.551357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.551437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.551492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.551548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.551635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.845 [2024-05-15 02:26:00.551673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-05-15 02:26:00.551702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.845 
[2024-05-15 02:26:00.551740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.845 [2024-05-15 02:26:00.551767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.845
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every outstanding READ/WRITE I/O on qid:1 (nsid:1, cid 6-124, lba roughly 7560-9576, len:8) between 02:26:00.551 and 02:26:00.578: each command is reprinted and completes with ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path is inaccessible; the intervening near-identical NOTICE pairs are omitted here ...]
[2024-05-15 02:26:00.578131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.849 [2024-05-15 02:26:00.578176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.849 [2024-05-15 02:26:00.578211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.849 [2024-05-15 02:26:00.578234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.849 [2024-05-15 02:26:00.578266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.849 [2024-05-15 02:26:00.578289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:15.849 Received shutdown signal, test time was about 36.253118 seconds
00:25:15.849
00:25:15.849                                          Latency(us)
00:25:15.849 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:15.849 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:15.849 Verification LBA range: start 0x0 length 0x4000
00:25:15.849 Nvme0n1            :      36.25    8349.68      32.62      0.00     0.00   15297.47     174.08 4087539.90
00:25:15.849 ===================================================================================================================
00:25:15.849 Total              :              8349.68      32.62      0.00     0.00   15297.47     174.08 4087539.90
00:25:15.849 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.108 rmmod nvme_tcp 00:25:16.108 rmmod nvme_fabrics 00:25:16.108 rmmod nvme_keyring 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 83868 ']' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 83868 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 83868 ']' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 83868 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status --
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83868 00:25:16.108 killing process with pid 83868 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83868' 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 83868 00:25:16.108 [2024-05-15 02:26:03.961597] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:16.108 02:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 83868 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:16.367 00:25:16.367 real 0m42.477s 00:25:16.367 user 2m20.546s 00:25:16.367 sys 0m10.064s 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:16.367 02:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:16.367 ************************************ 00:25:16.367 END TEST nvmf_host_multipath_status 00:25:16.367 ************************************ 00:25:16.367 02:26:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:16.367 02:26:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:16.367 02:26:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:16.367 02:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.367 ************************************ 00:25:16.367 START TEST nvmf_discovery_remove_ifc 00:25:16.367 ************************************ 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:16.367 * Looking for test storage... 
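The teardown of the multipath status test above boils down to a handful of steps: multipath_status.sh deletes the target-side subsystem over JSON-RPC, and nvmftestfini then syncs, unloads the host NVMe/TCP modules, kills the target application and flushes the addresses on the test interface. A minimal sketch of the same cleanup done by hand is below; the script path, subsystem NQN, PID and interface name are the ones visible in the log, and running these steps outside the autotest harness is an assumption:

  # Delete the subsystem the multipath test created, while the target app is still running (as in the log).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Host-side cleanup mirroring nvmftestfini.
  sync
  modprobe -v -r nvme-tcp        # the rmmod output above shows this also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics    # the harness runs this too; typically a no-op if already removed
  kill 83868                     # stop the nvmf target app (PID taken from this run)
  ip -4 addr flush nvmf_init_if  # remove the test addresses from the initiator-side interface

In the harness multipath_status.sh performs all of this via nvmftestfini after clearing its exit trap, so the manual form is mainly useful when a run is interrupted partway through.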
00:25:16.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:16.367 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:16.368 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:16.626 Cannot find device "nvmf_tgt_br" 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:25:16.626 Cannot find device "nvmf_tgt_br2" 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:16.626 Cannot find device "nvmf_tgt_br" 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:16.626 Cannot find device "nvmf_tgt_br2" 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:16.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:16.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:16.626 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:16.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:25:16.885 00:25:16.885 --- 10.0.0.2 ping statistics --- 00:25:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.885 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:16.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:16.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:25:16.885 00:25:16.885 --- 10.0.0.3 ping statistics --- 00:25:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.885 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:16.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:16.885 00:25:16.885 --- 10.0.0.1 ping statistics --- 00:25:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.885 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=85048 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 85048 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 85048 ']' 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:16.885 02:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.885 [2024-05-15 02:26:04.767075] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:16.885 [2024-05-15 02:26:04.767191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.143 [2024-05-15 02:26:04.909728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.143 [2024-05-15 02:26:04.969226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:17.143 [2024-05-15 02:26:04.969281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.143 [2024-05-15 02:26:04.969292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.143 [2024-05-15 02:26:04.969301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.143 [2024-05-15 02:26:04.969308] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.143 [2024-05-15 02:26:04.969338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.078 [2024-05-15 02:26:05.834257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.078 [2024-05-15 02:26:05.842192] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:18.078 [2024-05-15 02:26:05.842449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:18.078 null0 00:25:18.078 [2024-05-15 02:26:05.874359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=85092 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85092 /tmp/host.sock 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 85092 ']' 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:18.078 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
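What the trace above has built is the framework's standard virtual topology for these host tests: a network namespace (nvmf_tgt_ns_spdk) holds the target side, veth pairs join the initiator address 10.0.0.1 and the target addresses 10.0.0.2/10.0.0.3 onto the nvmf_br bridge, TCP port 4420 is opened in iptables, and two nvmf_tgt processes are launched, the target proper inside the namespace (pid 85048, core mask 0x2) and a host-side bdev_nvme application on /tmp/host.sock (pid 85092, core mask 0x1). A condensed sketch of that bring-up, using only commands that appear in the trace (the second target interface nvmf_tgt_if2 at 10.0.0.3 is set up the same way and omitted here; backgrounding with & stands in for the framework's nvmfappstart/waitforlisten handling):

    # rough sketch of the veth/netns topology built by nvmf_veth_init above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # connectivity checks, as run in the log
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    # target app inside the namespace, host-side app on a private RPC socket
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

Because the host-side application is started with --wait-for-rpc, the test first issues bdev_nvme_set_options -e 1 and framework_start_init over /tmp/host.sock (traced just below) before starting discovery.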
00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:18.078 02:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.078 [2024-05-15 02:26:05.962671] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:18.078 [2024-05-15 02:26:05.962808] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85092 ] 00:25:18.336 [2024-05-15 02:26:06.103084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.336 [2024-05-15 02:26:06.164247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.336 02:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.269 [2024-05-15 02:26:07.277918] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.269 [2024-05-15 02:26:07.277970] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.269 [2024-05-15 02:26:07.277993] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.527 [2024-05-15 02:26:07.364083] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:19.527 [2024-05-15 02:26:07.420582] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.527 [2024-05-15 02:26:07.420670] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.527 [2024-05-15 
02:26:07.420702] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.527 [2024-05-15 02:26:07.420721] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.527 [2024-05-15 02:26:07.420749] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.527 [2024-05-15 02:26:07.426195] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a10820 was disconnected and freed. delete nvme_qpair. 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.527 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.785 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.785 02:26:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.716 02:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.651 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.909 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.909 02:26:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.842 02:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.775 02:26:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.775 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.033 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.033 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.033 02:26:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.004 [2024-05-15 02:26:12.848247] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:25.004 [2024-05-15 02:26:12.848338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.004 [2024-05-15 02:26:12.848355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.004 [2024-05-15 02:26:12.848369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.004 [2024-05-15 02:26:12.848379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.004 [2024-05-15 02:26:12.848402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.004 [2024-05-15 02:26:12.848413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.004 [2024-05-15 02:26:12.848423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.004 [2024-05-15 02:26:12.848432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.004 [2024-05-15 02:26:12.848443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.004 [2024-05-15 02:26:12.848452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.004 [2024-05-15 02:26:12.848462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19da490 is same with the state(5) to be set 00:25:25.004 [2024-05-15 02:26:12.858247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19da490 (9): Bad file descriptor 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.004 [2024-05-15 02:26:12.868273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.004 02:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.939 [2024-05-15 02:26:13.880454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.939 02:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.313 [2024-05-15 02:26:14.904446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:27.313 [2024-05-15 02:26:14.904558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19da490 with addr=10.0.0.2, port=4420 00:25:27.313 [2024-05-15 02:26:14.904584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19da490 is same with the state(5) to be set 00:25:27.313 [2024-05-15 02:26:14.905336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19da490 (9): Bad file descriptor 00:25:27.313 [2024-05-15 02:26:14.905416] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
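The repetition above is the test's wait_for_bdev helper at work: after ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if and ip link set nvmf_tgt_if down (both traced earlier), it lists the host application's bdevs once a second until nvme0n1 disappears, while bdev_nvme reports the errno 110 timeouts, failed resets and failed connects seen here. Roughly equivalent shell, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py (the framework wraps this, and the real helper presumably bounds the number of retries):

    # sketch of the get_bdev_list / wait_for_bdev '' polling loop traced above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    expected=''                     # wait_for_bdev nvme0n1 compares against that name instead
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1                     # one bdev_get_bdevs round per second, matching the log
    done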
00:25:27.313 [2024-05-15 02:26:14.905466] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:27.313 [2024-05-15 02:26:14.905522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.313 [2024-05-15 02:26:14.905543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.313 [2024-05-15 02:26:14.905563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.313 [2024-05-15 02:26:14.905578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.313 [2024-05-15 02:26:14.905593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.313 [2024-05-15 02:26:14.905607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.313 [2024-05-15 02:26:14.905623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.313 [2024-05-15 02:26:14.905655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.313 [2024-05-15 02:26:14.905672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.313 [2024-05-15 02:26:14.905686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.313 [2024-05-15 02:26:14.905700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
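The failure cascade above reads as the direct consequence of the discovery options chosen at the start of this test: the session was opened with --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1 and --ctrlr-loss-timeout-sec 2, so once 10.0.0.2 stops answering, the reconnect attempts keep failing and after the short controller-loss window bdev_nvme fails the controller, removes the discovery entry and deletes nvme0n1, which is what lets the empty bdev list satisfy wait_for_bdev ''. For reference, the discovery command as issued earlier in the trace, again with rpc_cmd approximated by scripts/rpc.py:

    # discovery session with deliberately short timeouts (parameters copied from the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

Once the test restores the address and link (ip addr add 10.0.0.2/24 and ip link set nvmf_tgt_if up, below), the same discovery service re-attaches nqn.2016-06.io.spdk:cnode0 as nvme1 and the nvme1n1 bdev reappears.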
00:25:27.313 [2024-05-15 02:26:14.905744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1979280 (9): Bad file descriptor 00:25:27.313 [2024-05-15 02:26:14.906747] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:27.313 [2024-05-15 02:26:14.906790] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:27.313 02:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.313 02:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:27.313 02:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:28.249 02:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:28.249 02:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.282 [2024-05-15 02:26:16.920012] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:29.282 [2024-05-15 02:26:16.920059] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:29.282 [2024-05-15 02:26:16.920081] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:29.282 [2024-05-15 02:26:17.006184] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:29.282 [2024-05-15 02:26:17.061347] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:29.282 [2024-05-15 02:26:17.061430] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:29.282 [2024-05-15 02:26:17.061456] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:29.282 [2024-05-15 02:26:17.061474] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:29.282 [2024-05-15 02:26:17.061485] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:29.282 [2024-05-15 02:26:17.067990] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19f1130 was disconnected and freed. delete nvme_qpair. 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 85092 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 85092 ']' 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 85092 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85092 00:25:29.282 killing process with pid 85092 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85092' 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@965 -- # kill 85092 00:25:29.282 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 85092 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.566 rmmod nvme_tcp 00:25:29.566 rmmod nvme_fabrics 00:25:29.566 rmmod nvme_keyring 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 85048 ']' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 85048 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 85048 ']' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 85048 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85048 00:25:29.566 killing process with pid 85048 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85048' 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 85048 00:25:29.566 [2024-05-15 02:26:17.490922] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:29.566 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 85048 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.825 02:26:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:29.825 00:25:29.825 real 0m13.456s 00:25:29.825 user 0m22.862s 00:25:29.825 sys 0m1.453s 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:29.825 02:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.825 ************************************ 00:25:29.825 END TEST nvmf_discovery_remove_ifc 00:25:29.825 ************************************ 00:25:29.825 02:26:17 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:29.826 02:26:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:29.826 02:26:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:29.826 02:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.826 ************************************ 00:25:29.826 START TEST nvmf_identify_kernel_target 00:25:29.826 ************************************ 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:29.826 * Looking for test storage... 00:25:29.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.826 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:30.133 
02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:30.133 Cannot find device "nvmf_tgt_br" 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:30.133 Cannot find device "nvmf_tgt_br2" 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:30.133 Cannot find device "nvmf_tgt_br" 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:30.133 Cannot find device "nvmf_tgt_br2" 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:30.133 02:26:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:30.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:30.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:30.133 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns 
exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:30.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:25:30.414 00:25:30.414 --- 10.0.0.2 ping statistics --- 00:25:30.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.414 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:30.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:30.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:30.414 00:25:30.414 --- 10.0.0.3 ping statistics --- 00:25:30.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.414 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:30.414 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:30.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:25:30.415 00:25:30.415 --- 10.0.0.1 ping statistics --- 00:25:30.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.415 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:30.415 02:26:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:30.415 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:30.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:30.743 Waiting for block devices as requested 00:25:30.743 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:31.030 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:31.030 No valid GPT data, bailing 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:31.030 No valid GPT data, bailing 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:31.030 No valid GPT data, bailing 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:25:31.030 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:31.031 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:31.031 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:31.031 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:31.031 02:26:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:31.290 No valid GPT data, bailing 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.1 -t tcp -s 4420 00:25:31.290 00:25:31.290 Discovery Log Number of Records 2, Generation counter 2 00:25:31.290 =====Discovery Log Entry 0====== 00:25:31.290 trtype: tcp 00:25:31.290 adrfam: ipv4 00:25:31.290 subtype: current discovery subsystem 00:25:31.290 treq: not specified, sq flow control disable supported 00:25:31.290 portid: 1 00:25:31.290 trsvcid: 4420 00:25:31.290 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:31.290 traddr: 10.0.0.1 00:25:31.290 eflags: none 00:25:31.290 sectype: none 00:25:31.290 =====Discovery Log Entry 1====== 00:25:31.290 trtype: tcp 00:25:31.290 adrfam: ipv4 00:25:31.290 subtype: nvme subsystem 00:25:31.290 treq: not specified, sq flow control disable supported 00:25:31.290 portid: 1 00:25:31.290 trsvcid: 4420 00:25:31.290 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:31.290 traddr: 10.0.0.1 00:25:31.290 eflags: none 00:25:31.290 sectype: none 00:25:31.290 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:31.290 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:31.290 ===================================================== 00:25:31.290 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:31.290 ===================================================== 00:25:31.290 Controller Capabilities/Features 00:25:31.290 ================================ 00:25:31.290 Vendor ID: 0000 00:25:31.290 Subsystem Vendor ID: 0000 00:25:31.290 Serial Number: 8b3af13c796e4ca6cf94 00:25:31.290 Model Number: Linux 00:25:31.290 Firmware Version: 6.7.0-68 00:25:31.290 Recommended Arb Burst: 0 
00:25:31.290 IEEE OUI Identifier: 00 00 00 00:25:31.290 Multi-path I/O 00:25:31.290 May have multiple subsystem ports: No 00:25:31.290 May have multiple controllers: No 00:25:31.290 Associated with SR-IOV VF: No 00:25:31.290 Max Data Transfer Size: Unlimited 00:25:31.290 Max Number of Namespaces: 0 00:25:31.290 Max Number of I/O Queues: 1024 00:25:31.290 NVMe Specification Version (VS): 1.3 00:25:31.291 NVMe Specification Version (Identify): 1.3 00:25:31.291 Maximum Queue Entries: 1024 00:25:31.291 Contiguous Queues Required: No 00:25:31.291 Arbitration Mechanisms Supported 00:25:31.291 Weighted Round Robin: Not Supported 00:25:31.291 Vendor Specific: Not Supported 00:25:31.291 Reset Timeout: 7500 ms 00:25:31.291 Doorbell Stride: 4 bytes 00:25:31.291 NVM Subsystem Reset: Not Supported 00:25:31.291 Command Sets Supported 00:25:31.291 NVM Command Set: Supported 00:25:31.291 Boot Partition: Not Supported 00:25:31.291 Memory Page Size Minimum: 4096 bytes 00:25:31.291 Memory Page Size Maximum: 4096 bytes 00:25:31.291 Persistent Memory Region: Not Supported 00:25:31.291 Optional Asynchronous Events Supported 00:25:31.291 Namespace Attribute Notices: Not Supported 00:25:31.291 Firmware Activation Notices: Not Supported 00:25:31.291 ANA Change Notices: Not Supported 00:25:31.291 PLE Aggregate Log Change Notices: Not Supported 00:25:31.291 LBA Status Info Alert Notices: Not Supported 00:25:31.291 EGE Aggregate Log Change Notices: Not Supported 00:25:31.291 Normal NVM Subsystem Shutdown event: Not Supported 00:25:31.291 Zone Descriptor Change Notices: Not Supported 00:25:31.291 Discovery Log Change Notices: Supported 00:25:31.291 Controller Attributes 00:25:31.291 128-bit Host Identifier: Not Supported 00:25:31.291 Non-Operational Permissive Mode: Not Supported 00:25:31.291 NVM Sets: Not Supported 00:25:31.291 Read Recovery Levels: Not Supported 00:25:31.291 Endurance Groups: Not Supported 00:25:31.291 Predictable Latency Mode: Not Supported 00:25:31.291 Traffic Based Keep ALive: Not Supported 00:25:31.291 Namespace Granularity: Not Supported 00:25:31.291 SQ Associations: Not Supported 00:25:31.291 UUID List: Not Supported 00:25:31.291 Multi-Domain Subsystem: Not Supported 00:25:31.291 Fixed Capacity Management: Not Supported 00:25:31.291 Variable Capacity Management: Not Supported 00:25:31.291 Delete Endurance Group: Not Supported 00:25:31.291 Delete NVM Set: Not Supported 00:25:31.291 Extended LBA Formats Supported: Not Supported 00:25:31.291 Flexible Data Placement Supported: Not Supported 00:25:31.291 00:25:31.291 Controller Memory Buffer Support 00:25:31.291 ================================ 00:25:31.291 Supported: No 00:25:31.291 00:25:31.291 Persistent Memory Region Support 00:25:31.291 ================================ 00:25:31.291 Supported: No 00:25:31.291 00:25:31.291 Admin Command Set Attributes 00:25:31.291 ============================ 00:25:31.291 Security Send/Receive: Not Supported 00:25:31.291 Format NVM: Not Supported 00:25:31.291 Firmware Activate/Download: Not Supported 00:25:31.291 Namespace Management: Not Supported 00:25:31.291 Device Self-Test: Not Supported 00:25:31.291 Directives: Not Supported 00:25:31.291 NVMe-MI: Not Supported 00:25:31.291 Virtualization Management: Not Supported 00:25:31.291 Doorbell Buffer Config: Not Supported 00:25:31.291 Get LBA Status Capability: Not Supported 00:25:31.291 Command & Feature Lockdown Capability: Not Supported 00:25:31.291 Abort Command Limit: 1 00:25:31.291 Async Event Request Limit: 1 00:25:31.291 Number of Firmware Slots: N/A 
00:25:31.291 Firmware Slot 1 Read-Only: N/A 00:25:31.550 Firmware Activation Without Reset: N/A 00:25:31.550 Multiple Update Detection Support: N/A 00:25:31.550 Firmware Update Granularity: No Information Provided 00:25:31.550 Per-Namespace SMART Log: No 00:25:31.550 Asymmetric Namespace Access Log Page: Not Supported 00:25:31.550 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:31.550 Command Effects Log Page: Not Supported 00:25:31.550 Get Log Page Extended Data: Supported 00:25:31.550 Telemetry Log Pages: Not Supported 00:25:31.550 Persistent Event Log Pages: Not Supported 00:25:31.550 Supported Log Pages Log Page: May Support 00:25:31.550 Commands Supported & Effects Log Page: Not Supported 00:25:31.550 Feature Identifiers & Effects Log Page:May Support 00:25:31.550 NVMe-MI Commands & Effects Log Page: May Support 00:25:31.550 Data Area 4 for Telemetry Log: Not Supported 00:25:31.550 Error Log Page Entries Supported: 1 00:25:31.550 Keep Alive: Not Supported 00:25:31.550 00:25:31.550 NVM Command Set Attributes 00:25:31.550 ========================== 00:25:31.550 Submission Queue Entry Size 00:25:31.550 Max: 1 00:25:31.550 Min: 1 00:25:31.550 Completion Queue Entry Size 00:25:31.550 Max: 1 00:25:31.550 Min: 1 00:25:31.550 Number of Namespaces: 0 00:25:31.550 Compare Command: Not Supported 00:25:31.550 Write Uncorrectable Command: Not Supported 00:25:31.550 Dataset Management Command: Not Supported 00:25:31.550 Write Zeroes Command: Not Supported 00:25:31.550 Set Features Save Field: Not Supported 00:25:31.550 Reservations: Not Supported 00:25:31.550 Timestamp: Not Supported 00:25:31.550 Copy: Not Supported 00:25:31.550 Volatile Write Cache: Not Present 00:25:31.550 Atomic Write Unit (Normal): 1 00:25:31.550 Atomic Write Unit (PFail): 1 00:25:31.550 Atomic Compare & Write Unit: 1 00:25:31.550 Fused Compare & Write: Not Supported 00:25:31.550 Scatter-Gather List 00:25:31.550 SGL Command Set: Supported 00:25:31.550 SGL Keyed: Not Supported 00:25:31.550 SGL Bit Bucket Descriptor: Not Supported 00:25:31.550 SGL Metadata Pointer: Not Supported 00:25:31.550 Oversized SGL: Not Supported 00:25:31.550 SGL Metadata Address: Not Supported 00:25:31.550 SGL Offset: Supported 00:25:31.550 Transport SGL Data Block: Not Supported 00:25:31.550 Replay Protected Memory Block: Not Supported 00:25:31.550 00:25:31.550 Firmware Slot Information 00:25:31.550 ========================= 00:25:31.550 Active slot: 0 00:25:31.550 00:25:31.550 00:25:31.550 Error Log 00:25:31.550 ========= 00:25:31.550 00:25:31.550 Active Namespaces 00:25:31.550 ================= 00:25:31.550 Discovery Log Page 00:25:31.550 ================== 00:25:31.550 Generation Counter: 2 00:25:31.550 Number of Records: 2 00:25:31.550 Record Format: 0 00:25:31.550 00:25:31.550 Discovery Log Entry 0 00:25:31.550 ---------------------- 00:25:31.550 Transport Type: 3 (TCP) 00:25:31.550 Address Family: 1 (IPv4) 00:25:31.550 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:31.550 Entry Flags: 00:25:31.550 Duplicate Returned Information: 0 00:25:31.550 Explicit Persistent Connection Support for Discovery: 0 00:25:31.550 Transport Requirements: 00:25:31.550 Secure Channel: Not Specified 00:25:31.550 Port ID: 1 (0x0001) 00:25:31.550 Controller ID: 65535 (0xffff) 00:25:31.550 Admin Max SQ Size: 32 00:25:31.550 Transport Service Identifier: 4420 00:25:31.550 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:31.550 Transport Address: 10.0.0.1 00:25:31.550 Discovery Log Entry 1 00:25:31.550 ---------------------- 
00:25:31.551 Transport Type: 3 (TCP) 00:25:31.551 Address Family: 1 (IPv4) 00:25:31.551 Subsystem Type: 2 (NVM Subsystem) 00:25:31.551 Entry Flags: 00:25:31.551 Duplicate Returned Information: 0 00:25:31.551 Explicit Persistent Connection Support for Discovery: 0 00:25:31.551 Transport Requirements: 00:25:31.551 Secure Channel: Not Specified 00:25:31.551 Port ID: 1 (0x0001) 00:25:31.551 Controller ID: 65535 (0xffff) 00:25:31.551 Admin Max SQ Size: 32 00:25:31.551 Transport Service Identifier: 4420 00:25:31.551 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:31.551 Transport Address: 10.0.0.1 00:25:31.551 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:31.551 get_feature(0x01) failed 00:25:31.551 get_feature(0x02) failed 00:25:31.551 get_feature(0x04) failed 00:25:31.551 ===================================================== 00:25:31.551 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:31.551 ===================================================== 00:25:31.551 Controller Capabilities/Features 00:25:31.551 ================================ 00:25:31.551 Vendor ID: 0000 00:25:31.551 Subsystem Vendor ID: 0000 00:25:31.551 Serial Number: a26ca5fff8520d039130 00:25:31.551 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:31.551 Firmware Version: 6.7.0-68 00:25:31.551 Recommended Arb Burst: 6 00:25:31.551 IEEE OUI Identifier: 00 00 00 00:25:31.551 Multi-path I/O 00:25:31.551 May have multiple subsystem ports: Yes 00:25:31.551 May have multiple controllers: Yes 00:25:31.551 Associated with SR-IOV VF: No 00:25:31.551 Max Data Transfer Size: Unlimited 00:25:31.551 Max Number of Namespaces: 1024 00:25:31.551 Max Number of I/O Queues: 128 00:25:31.551 NVMe Specification Version (VS): 1.3 00:25:31.551 NVMe Specification Version (Identify): 1.3 00:25:31.551 Maximum Queue Entries: 1024 00:25:31.551 Contiguous Queues Required: No 00:25:31.551 Arbitration Mechanisms Supported 00:25:31.551 Weighted Round Robin: Not Supported 00:25:31.551 Vendor Specific: Not Supported 00:25:31.551 Reset Timeout: 7500 ms 00:25:31.551 Doorbell Stride: 4 bytes 00:25:31.551 NVM Subsystem Reset: Not Supported 00:25:31.551 Command Sets Supported 00:25:31.551 NVM Command Set: Supported 00:25:31.551 Boot Partition: Not Supported 00:25:31.551 Memory Page Size Minimum: 4096 bytes 00:25:31.551 Memory Page Size Maximum: 4096 bytes 00:25:31.551 Persistent Memory Region: Not Supported 00:25:31.551 Optional Asynchronous Events Supported 00:25:31.551 Namespace Attribute Notices: Supported 00:25:31.551 Firmware Activation Notices: Not Supported 00:25:31.551 ANA Change Notices: Supported 00:25:31.551 PLE Aggregate Log Change Notices: Not Supported 00:25:31.551 LBA Status Info Alert Notices: Not Supported 00:25:31.551 EGE Aggregate Log Change Notices: Not Supported 00:25:31.551 Normal NVM Subsystem Shutdown event: Not Supported 00:25:31.551 Zone Descriptor Change Notices: Not Supported 00:25:31.551 Discovery Log Change Notices: Not Supported 00:25:31.551 Controller Attributes 00:25:31.551 128-bit Host Identifier: Supported 00:25:31.551 Non-Operational Permissive Mode: Not Supported 00:25:31.551 NVM Sets: Not Supported 00:25:31.551 Read Recovery Levels: Not Supported 00:25:31.551 Endurance Groups: Not Supported 00:25:31.551 Predictable Latency Mode: Not Supported 00:25:31.551 Traffic Based Keep ALive: 
Supported 00:25:31.551 Namespace Granularity: Not Supported 00:25:31.551 SQ Associations: Not Supported 00:25:31.551 UUID List: Not Supported 00:25:31.551 Multi-Domain Subsystem: Not Supported 00:25:31.551 Fixed Capacity Management: Not Supported 00:25:31.551 Variable Capacity Management: Not Supported 00:25:31.551 Delete Endurance Group: Not Supported 00:25:31.551 Delete NVM Set: Not Supported 00:25:31.551 Extended LBA Formats Supported: Not Supported 00:25:31.551 Flexible Data Placement Supported: Not Supported 00:25:31.551 00:25:31.551 Controller Memory Buffer Support 00:25:31.551 ================================ 00:25:31.551 Supported: No 00:25:31.551 00:25:31.551 Persistent Memory Region Support 00:25:31.551 ================================ 00:25:31.551 Supported: No 00:25:31.551 00:25:31.551 Admin Command Set Attributes 00:25:31.551 ============================ 00:25:31.551 Security Send/Receive: Not Supported 00:25:31.551 Format NVM: Not Supported 00:25:31.551 Firmware Activate/Download: Not Supported 00:25:31.551 Namespace Management: Not Supported 00:25:31.551 Device Self-Test: Not Supported 00:25:31.551 Directives: Not Supported 00:25:31.551 NVMe-MI: Not Supported 00:25:31.551 Virtualization Management: Not Supported 00:25:31.551 Doorbell Buffer Config: Not Supported 00:25:31.551 Get LBA Status Capability: Not Supported 00:25:31.551 Command & Feature Lockdown Capability: Not Supported 00:25:31.551 Abort Command Limit: 4 00:25:31.551 Async Event Request Limit: 4 00:25:31.551 Number of Firmware Slots: N/A 00:25:31.551 Firmware Slot 1 Read-Only: N/A 00:25:31.551 Firmware Activation Without Reset: N/A 00:25:31.551 Multiple Update Detection Support: N/A 00:25:31.551 Firmware Update Granularity: No Information Provided 00:25:31.551 Per-Namespace SMART Log: Yes 00:25:31.551 Asymmetric Namespace Access Log Page: Supported 00:25:31.551 ANA Transition Time : 10 sec 00:25:31.551 00:25:31.551 Asymmetric Namespace Access Capabilities 00:25:31.551 ANA Optimized State : Supported 00:25:31.551 ANA Non-Optimized State : Supported 00:25:31.551 ANA Inaccessible State : Supported 00:25:31.551 ANA Persistent Loss State : Supported 00:25:31.551 ANA Change State : Supported 00:25:31.551 ANAGRPID is not changed : No 00:25:31.551 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:31.551 00:25:31.551 ANA Group Identifier Maximum : 128 00:25:31.551 Number of ANA Group Identifiers : 128 00:25:31.551 Max Number of Allowed Namespaces : 1024 00:25:31.551 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:31.551 Command Effects Log Page: Supported 00:25:31.551 Get Log Page Extended Data: Supported 00:25:31.551 Telemetry Log Pages: Not Supported 00:25:31.551 Persistent Event Log Pages: Not Supported 00:25:31.551 Supported Log Pages Log Page: May Support 00:25:31.551 Commands Supported & Effects Log Page: Not Supported 00:25:31.551 Feature Identifiers & Effects Log Page:May Support 00:25:31.551 NVMe-MI Commands & Effects Log Page: May Support 00:25:31.551 Data Area 4 for Telemetry Log: Not Supported 00:25:31.551 Error Log Page Entries Supported: 128 00:25:31.551 Keep Alive: Supported 00:25:31.551 Keep Alive Granularity: 1000 ms 00:25:31.551 00:25:31.551 NVM Command Set Attributes 00:25:31.551 ========================== 00:25:31.551 Submission Queue Entry Size 00:25:31.551 Max: 64 00:25:31.551 Min: 64 00:25:31.551 Completion Queue Entry Size 00:25:31.551 Max: 16 00:25:31.551 Min: 16 00:25:31.551 Number of Namespaces: 1024 00:25:31.551 Compare Command: Not Supported 00:25:31.551 Write Uncorrectable Command: Not 
Supported 00:25:31.551 Dataset Management Command: Supported 00:25:31.551 Write Zeroes Command: Supported 00:25:31.551 Set Features Save Field: Not Supported 00:25:31.551 Reservations: Not Supported 00:25:31.551 Timestamp: Not Supported 00:25:31.551 Copy: Not Supported 00:25:31.551 Volatile Write Cache: Present 00:25:31.551 Atomic Write Unit (Normal): 1 00:25:31.551 Atomic Write Unit (PFail): 1 00:25:31.551 Atomic Compare & Write Unit: 1 00:25:31.551 Fused Compare & Write: Not Supported 00:25:31.551 Scatter-Gather List 00:25:31.552 SGL Command Set: Supported 00:25:31.552 SGL Keyed: Not Supported 00:25:31.552 SGL Bit Bucket Descriptor: Not Supported 00:25:31.552 SGL Metadata Pointer: Not Supported 00:25:31.552 Oversized SGL: Not Supported 00:25:31.552 SGL Metadata Address: Not Supported 00:25:31.552 SGL Offset: Supported 00:25:31.552 Transport SGL Data Block: Not Supported 00:25:31.552 Replay Protected Memory Block: Not Supported 00:25:31.552 00:25:31.552 Firmware Slot Information 00:25:31.552 ========================= 00:25:31.552 Active slot: 0 00:25:31.552 00:25:31.552 Asymmetric Namespace Access 00:25:31.552 =========================== 00:25:31.552 Change Count : 0 00:25:31.552 Number of ANA Group Descriptors : 1 00:25:31.552 ANA Group Descriptor : 0 00:25:31.552 ANA Group ID : 1 00:25:31.552 Number of NSID Values : 1 00:25:31.552 Change Count : 0 00:25:31.552 ANA State : 1 00:25:31.552 Namespace Identifier : 1 00:25:31.552 00:25:31.552 Commands Supported and Effects 00:25:31.552 ============================== 00:25:31.552 Admin Commands 00:25:31.552 -------------- 00:25:31.552 Get Log Page (02h): Supported 00:25:31.552 Identify (06h): Supported 00:25:31.552 Abort (08h): Supported 00:25:31.552 Set Features (09h): Supported 00:25:31.552 Get Features (0Ah): Supported 00:25:31.552 Asynchronous Event Request (0Ch): Supported 00:25:31.552 Keep Alive (18h): Supported 00:25:31.552 I/O Commands 00:25:31.552 ------------ 00:25:31.552 Flush (00h): Supported 00:25:31.552 Write (01h): Supported LBA-Change 00:25:31.552 Read (02h): Supported 00:25:31.552 Write Zeroes (08h): Supported LBA-Change 00:25:31.552 Dataset Management (09h): Supported 00:25:31.552 00:25:31.552 Error Log 00:25:31.552 ========= 00:25:31.552 Entry: 0 00:25:31.552 Error Count: 0x3 00:25:31.552 Submission Queue Id: 0x0 00:25:31.552 Command Id: 0x5 00:25:31.552 Phase Bit: 0 00:25:31.552 Status Code: 0x2 00:25:31.552 Status Code Type: 0x0 00:25:31.552 Do Not Retry: 1 00:25:31.552 Error Location: 0x28 00:25:31.552 LBA: 0x0 00:25:31.552 Namespace: 0x0 00:25:31.552 Vendor Log Page: 0x0 00:25:31.552 ----------- 00:25:31.552 Entry: 1 00:25:31.552 Error Count: 0x2 00:25:31.552 Submission Queue Id: 0x0 00:25:31.552 Command Id: 0x5 00:25:31.552 Phase Bit: 0 00:25:31.552 Status Code: 0x2 00:25:31.552 Status Code Type: 0x0 00:25:31.552 Do Not Retry: 1 00:25:31.552 Error Location: 0x28 00:25:31.552 LBA: 0x0 00:25:31.552 Namespace: 0x0 00:25:31.552 Vendor Log Page: 0x0 00:25:31.552 ----------- 00:25:31.552 Entry: 2 00:25:31.552 Error Count: 0x1 00:25:31.552 Submission Queue Id: 0x0 00:25:31.552 Command Id: 0x4 00:25:31.552 Phase Bit: 0 00:25:31.552 Status Code: 0x2 00:25:31.552 Status Code Type: 0x0 00:25:31.552 Do Not Retry: 1 00:25:31.552 Error Location: 0x28 00:25:31.552 LBA: 0x0 00:25:31.552 Namespace: 0x0 00:25:31.552 Vendor Log Page: 0x0 00:25:31.552 00:25:31.552 Number of Queues 00:25:31.552 ================ 00:25:31.552 Number of I/O Submission Queues: 128 00:25:31.552 Number of I/O Completion Queues: 128 00:25:31.552 00:25:31.552 ZNS 
Specific Controller Data 00:25:31.552 ============================ 00:25:31.552 Zone Append Size Limit: 0 00:25:31.552 00:25:31.552 00:25:31.552 Active Namespaces 00:25:31.552 ================= 00:25:31.552 get_feature(0x05) failed 00:25:31.552 Namespace ID:1 00:25:31.552 Command Set Identifier: NVM (00h) 00:25:31.552 Deallocate: Supported 00:25:31.552 Deallocated/Unwritten Error: Not Supported 00:25:31.552 Deallocated Read Value: Unknown 00:25:31.552 Deallocate in Write Zeroes: Not Supported 00:25:31.552 Deallocated Guard Field: 0xFFFF 00:25:31.552 Flush: Supported 00:25:31.552 Reservation: Not Supported 00:25:31.552 Namespace Sharing Capabilities: Multiple Controllers 00:25:31.552 Size (in LBAs): 1310720 (5GiB) 00:25:31.552 Capacity (in LBAs): 1310720 (5GiB) 00:25:31.552 Utilization (in LBAs): 1310720 (5GiB) 00:25:31.552 UUID: 188875d4-ddc6-4680-a684-87fa0b3da1c0 00:25:31.552 Thin Provisioning: Not Supported 00:25:31.552 Per-NS Atomic Units: Yes 00:25:31.552 Atomic Boundary Size (Normal): 0 00:25:31.552 Atomic Boundary Size (PFail): 0 00:25:31.552 Atomic Boundary Offset: 0 00:25:31.552 NGUID/EUI64 Never Reused: No 00:25:31.552 ANA group ID: 1 00:25:31.552 Namespace Write Protected: No 00:25:31.552 Number of LBA Formats: 1 00:25:31.552 Current LBA Format: LBA Format #00 00:25:31.552 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:31.552 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.552 rmmod nvme_tcp 00:25:31.552 rmmod nvme_fabrics 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.552 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:31.811 02:26:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:32.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:32.377 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:32.635 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:32.635 00:25:32.635 real 0m2.719s 00:25:32.635 user 0m0.950s 00:25:32.635 sys 0m1.221s 00:25:32.635 02:26:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.635 ************************************ 00:25:32.635 END TEST nvmf_identify_kernel_target 00:25:32.635 ************************************ 00:25:32.635 02:26:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.636 02:26:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:32.636 02:26:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.636 02:26:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.636 02:26:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.636 ************************************ 00:25:32.636 START TEST nvmf_auth_host 00:25:32.636 ************************************ 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:32.636 * Looking for test storage... 
00:25:32.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:32.636 Cannot find device "nvmf_tgt_br" 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:32.636 Cannot find device "nvmf_tgt_br2" 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:25:32.636 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:32.895 Cannot find device "nvmf_tgt_br" 
00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:32.895 Cannot find device "nvmf_tgt_br2" 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:32.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:32.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:32.895 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:33.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:25:33.154 00:25:33.154 --- 10.0.0.2 ping statistics --- 00:25:33.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.154 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:33.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:33.154 00:25:33.154 --- 10.0.0.3 ping statistics --- 00:25:33.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.154 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:25:33.154 00:25:33.154 --- 10.0.0.1 ping statistics --- 00:25:33.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.154 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=85866 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 85866 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 85866 ']' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:33.154 02:26:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:33.154 02:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38db3baa473f9c833d43490fa3d33b7d 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PVI 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38db3baa473f9c833d43490fa3d33b7d 0 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38db3baa473f9c833d43490fa3d33b7d 0 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38db3baa473f9c833d43490fa3d33b7d 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PVI 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PVI 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PVI 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bd9a280a3c60d4f0eec42f2831052aecd63fe3bef271b801d2f8c801a593d81 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6fu 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bd9a280a3c60d4f0eec42f2831052aecd63fe3bef271b801d2f8c801a593d81 3 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bd9a280a3c60d4f0eec42f2831052aecd63fe3bef271b801d2f8c801a593d81 3 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bd9a280a3c60d4f0eec42f2831052aecd63fe3bef271b801d2f8c801a593d81 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:33.412 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6fu 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6fu 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6fu 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.671 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c42f5e4b3a5436e28afd8ecdc95210cd253afe33fb9cf5f1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.s0E 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c42f5e4b3a5436e28afd8ecdc95210cd253afe33fb9cf5f1 0 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c42f5e4b3a5436e28afd8ecdc95210cd253afe33fb9cf5f1 0 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c42f5e4b3a5436e28afd8ecdc95210cd253afe33fb9cf5f1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.s0E 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.s0E 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.s0E 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=64cea822dbb77b0939585ffb9e831e2f265f9ed8ec4080af 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.E9u 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 64cea822dbb77b0939585ffb9e831e2f265f9ed8ec4080af 2 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 64cea822dbb77b0939585ffb9e831e2f265f9ed8ec4080af 2 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=64cea822dbb77b0939585ffb9e831e2f265f9ed8ec4080af 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.E9u 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.E9u 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.E9u 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ea4bd43086cab8460ed03daea1a740ff 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6d6 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ea4bd43086cab8460ed03daea1a740ff 
1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ea4bd43086cab8460ed03daea1a740ff 1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ea4bd43086cab8460ed03daea1a740ff 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6d6 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6d6 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6d6 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a895192db810880f081e0524cc7ea95 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0v6 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a895192db810880f081e0524cc7ea95 1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a895192db810880f081e0524cc7ea95 1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a895192db810880f081e0524cc7ea95 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:33.672 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0v6 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0v6 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0v6 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:33.931 02:26:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=68238283a9d5fad6494bf7f70b14c27a850da777b112b374 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fDa 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 68238283a9d5fad6494bf7f70b14c27a850da777b112b374 2 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 68238283a9d5fad6494bf7f70b14c27a850da777b112b374 2 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=68238283a9d5fad6494bf7f70b14c27a850da777b112b374 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fDa 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fDa 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fDa 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=850e0aeeabd60b0e2d6e26c96956214c 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FMi 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 850e0aeeabd60b0e2d6e26c96956214c 0 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 850e0aeeabd60b0e2d6e26c96956214c 0 00:25:33.931 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=850e0aeeabd60b0e2d6e26c96956214c 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FMi 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FMi 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.FMi 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45725bbadb74be36fc8316a23cc27387e5360d792749b6599daf1bd8784ae148 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EbN 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45725bbadb74be36fc8316a23cc27387e5360d792749b6599daf1bd8784ae148 3 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45725bbadb74be36fc8316a23cc27387e5360d792749b6599daf1bd8784ae148 3 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45725bbadb74be36fc8316a23cc27387e5360d792749b6599daf1bd8784ae148 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EbN 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EbN 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.EbN 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 85866 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 85866 ']' 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:33.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
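[Editor's note] Each gen_dhchap_key call above draws a random secret with xxd from /dev/urandom and pipes it through an inline python step; xtrace hides the redirections, but the logged DHHC-1 strings are consistent with base64-encoding the ASCII secret followed by its little-endian CRC-32. A hypothetical re-creation of one such key (the encoding detail and the python wrapper are assumptions inferred from the logged values, not the script verbatim):

digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as in "gen_dhchap_key null 48"
file=$(mktemp -t spdk.key-null.XXX)
formatted=$(python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
# assumed layout: ASCII secret + little-endian CRC-32, base64-encoded
blob = secret + struct.pack("<I", zlib.crc32(secret))
print(f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:")
PY
)
echo "$formatted" > "$file"
chmod 0600 "$file"                     # matches the chmod 0600 entries in the trace
echo "$file"                           # e.g. /tmp/spdk.key-null.PVI

Decoding the base64 portion of the DHHC-1 values shown in the trace (for example keys[1]) gives back the hex secret printed a few entries earlier, which is what motivates the sketch above.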
00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:33.932 02:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PVI 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6fu ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6fu 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.s0E 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.E9u ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E9u 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6d6 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0v6 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0v6 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fDa 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FMi ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FMi 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.EbN 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:34.499 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
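[Editor's note] configure_kernel_target, which starts here, sets up a kernel (nvmet) target to play the role of the remote subsystem. xtrace strips the redirection targets from the echo entries below, so the mapping onto configfs files is not visible in the log; on the stock nvmet configfs layout the values from the trace presumably land as follows (attribute names are assumptions, only the values and mkdir/ln -s paths appear in the trace):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"        # "echo SPDK-nqn..." entry
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"             # block device picked by the GPT scan below
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                                # host-side address from the veth setup above
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                                # expose the subsystem on the port

The subsequent "nvme discover ... -a 10.0.0.1 -t tcp -s 4420" output, listing nqn.2024-02.io.spdk:cnode0 on portid 1, confirms the port and subsystem ended up wired this way.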
00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:34.500 02:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:34.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:34.758 Waiting for block devices as requested 00:25:34.758 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:34.758 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:35.325 No valid GPT data, bailing 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:35.325 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:35.325 No valid GPT data, bailing 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:35.584 No valid GPT data, bailing 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:35.584 No valid GPT data, bailing 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:35.584 02:26:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:35.584 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.1 -t tcp -s 4420 00:25:35.584 00:25:35.584 Discovery Log Number of Records 2, Generation counter 2 00:25:35.584 =====Discovery Log Entry 0====== 00:25:35.584 trtype: tcp 00:25:35.584 adrfam: ipv4 00:25:35.584 subtype: current discovery subsystem 00:25:35.584 treq: not specified, sq flow control disable supported 00:25:35.584 portid: 1 00:25:35.584 trsvcid: 4420 00:25:35.584 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:35.584 traddr: 10.0.0.1 00:25:35.584 eflags: none 00:25:35.584 sectype: none 00:25:35.584 =====Discovery Log Entry 1====== 00:25:35.584 trtype: tcp 00:25:35.584 adrfam: ipv4 00:25:35.585 subtype: nvme subsystem 00:25:35.585 treq: not specified, sq flow control disable supported 00:25:35.585 portid: 1 00:25:35.585 trsvcid: 4420 00:25:35.585 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:35.585 traddr: 10.0.0.1 00:25:35.585 eflags: none 00:25:35.585 sectype: none 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.585 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.844 nvme0n1 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.844 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.845 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.845 02:26:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 nvme0n1 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.103 02:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:36.103 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.104 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 nvme0n1 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.361 02:26:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 nvme0n1 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.361 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:36.618 02:26:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.618 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.619 nvme0n1 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:36.619 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.875 nvme0n1 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.875 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.876 02:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.440 nvme0n1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.440 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.698 nvme0n1 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.698 02:26:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.698 nvme0n1 00:25:37.698 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.699 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.699 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.699 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.699 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.957 nvme0n1 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.957 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.216 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.217 02:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.217 nvme0n1 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.217 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
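On the target side, the bare echo lines in the trace ('hmac(sha256)', the DH group name, and the DHHC-1 secrets) are written by the helper into the kernel nvmet configuration for the allowed host; xtrace does not print redirections, which is why only the echoes show up. Assuming the usual Linux nvmet configfs layout (the directory and attribute names below are that assumption, not something shown in this log), the hand-rolled equivalent of nvmet_auth_set_key sha256 ffdhe4096 0 would look roughly like:

# Target-side sketch (assumed nvmet configfs attribute names); the host NQN and
# the secrets are the ones visible in this trace for keyid 0.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
echo ffdhe4096 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl:' > "$host_dir/dhchap_key"
# The controller key is optional and only written when ckey${keyid} is non-empty.
echo 'DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=:' > "$host_dir/dhchap_ctrl_key"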
00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.167 nvme0n1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.167 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.425 nvme0n1 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.425 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 nvme0n1 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.684 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.942 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.943 nvme0n1 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.943 02:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.200 02:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.200 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.201 02:26:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.201 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.459 nvme0n1 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.459 02:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.358 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.623 nvme0n1 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.623 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.882 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.204 nvme0n1 00:25:43.204 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.204 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.204 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.204 02:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.204 02:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.204 
02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.204 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.205 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.464 nvme0n1 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.464 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.723 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.724 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 nvme0n1 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 02:26:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 02:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.550 nvme0n1 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.550 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.551 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.118 nvme0n1 00:25:45.118 02:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.118 02:26:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.118 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.685 nvme0n1 00:25:45.685 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.685 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.685 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.943 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 02:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.511 nvme0n1 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.511 
02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
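The xtrace above repeats the same DH-HMAC-CHAP round for each digest, dhgroup and key id. Condensed into one place, the loop that host/auth.sh walks through looks roughly like the sketch below; the helpers (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip) and the digests/dhgroups/keys/ckeys arrays are defined earlier in the script and are only assumed here, and the configfs writes performed inside nvmet_auth_set_key are not visible in this trace.

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the DHCHAP secret (and optional controller secret) for this host.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Host side: restrict the initiator to the digest/dhgroup under test, then connect.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Only pass --dhchap-ctrlr-key when a controller key exists for this key id (as at auth.sh@58).
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # The attach only succeeds if authentication passed; verify the controller, then tear down.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

The stray "nvme0n1" lines in the trace are the bdev name printed by the attach call once the connection, and therefore the authentication, succeeds.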
00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.511 02:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.447 nvme0n1 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.447 
02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.447 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 nvme0n1 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.053 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 nvme0n1 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 02:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
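The trace above has just completed one full host-side pass for sha384/ffdhe2048 with key index 0 and is starting the next index. Every pass follows the same shape: restrict the SPDK initiator to the digest and DH group under test with bdev_nvme_set_options, attach with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller secret exists for that index), confirm the controller came up via bdev_nvme_get_controllers, then detach it. A minimal bash sketch of that connect_authenticate helper, reconstructed from the xtrace lines above (the RPC names and flags are taken verbatim from the log; the exact host/auth.sh body and the keys/ckeys array names are assumptions):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=()
        # Bidirectional DH-HMAC-CHAP only when a controller secret is defined
        # for this key index (mirrors the ${ckeys[keyid]:+...} expansion in the log).
        [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

        # Limit the initiator to the digest/DH group combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with the host secret for this index and verify the controller appears.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The outer loop visible in the log (auth.sh@100-103) simply walks digests x dhgroups x key indices and calls nvmet_auth_set_key followed by connect_authenticate for each combination, which is why the same block of xtrace output repeats below with only the digest, DH group, and key index changing.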
00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.054 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 nvme0n1 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 nvme0n1 00:25:48.313 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:48.571 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.572 nvme0n1 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.572 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.830 nvme0n1 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
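Before each of those connect attempts the log also shows the target side being re-keyed: nvmet_auth_set_key (auth.sh@42-51) echoes 'hmac(sha384)', the DH group, and the DHHC-1 secrets, which in the kernel-target setup used by this test end up in the nvmet host entry. A rough sketch of what that helper does, assuming the standard Linux nvmet configfs layout; the /sys/kernel/config path and attribute names are not shown in this log and are an assumption, only the echoed values are:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        # keys[]/ckeys[] hold the DHHC-1 secrets seen in the trace; array names assumed.
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
        echo "$dhgroup"        > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "$key"            > "${host}/dhchap_key"      # host secret, DHHC-1:xx:...:
        # Controller (bidirectional) secret only when one exists for this index.
        [[ -n $ckey ]] && echo "$ckey" > "${host}/dhchap_ctrl_key"
    }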
00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.830 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.831 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.831 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.831 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.831 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.088 nvme0n1 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.088 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
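The repeated nvmf/common.sh@741-755 block in the trace is the helper that picks which address to dial. Its logic is fully visible in the xtrace: map the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), bail out if either the transport or the mapping is empty, dereference that variable, and echo the result. A sketch of get_main_ns_ip along those lines; the $TEST_TRANSPORT variable name is an assumption, the rest mirrors the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-namespace IP
        )

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
        [[ -z ${!ip} ]] && return 1            # make sure it is actually set
        echo "${!ip}"
    }

In this run it always resolves to 10.0.0.1, which is why every bdev_nvme_attach_controller call in the log uses -a 10.0.0.1.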
00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.089 02:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.089 nvme0n1 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.089 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.347 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.348 nvme0n1 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.348 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.606 nvme0n1 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:49.606 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.607 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.865 nvme0n1 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.865 02:26:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.865 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.123 nvme0n1 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.123 02:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.123 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.382 nvme0n1 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.382 02:26:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.382 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.639 nvme0n1 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:50.639 02:26:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.639 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.953 nvme0n1 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:50.953 02:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.211 nvme0n1 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.211 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.212 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.777 nvme0n1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.777 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.035 nvme0n1 00:25:52.035 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.035 02:26:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.035 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.035 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.035 02:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.035 02:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.035 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.036 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.036 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 nvme0n1 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.654 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.937 nvme0n1 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.937 02:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 nvme0n1 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:53.502 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 02:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.069 nvme0n1 00:25:54.069 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.069 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.069 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.070 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.329 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.896 nvme0n1 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.896 02:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.831 nvme0n1 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.831 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.832 02:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.397 nvme0n1 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.397 02:26:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.397 02:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 nvme0n1 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 nvme0n1 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:57.330 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.331 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.588 nvme0n1 00:25:57.588 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.588 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.588 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 nvme0n1 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.589 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.848 02:26:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 nvme0n1 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.848 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.849 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 nvme0n1 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 nvme0n1 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 
02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.107 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 nvme0n1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.366 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.624 nvme0n1 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.624 02:26:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:58.624 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.625 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.882 nvme0n1 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:58.882 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.883 
02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.883 nvme0n1 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.883 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.140 02:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.399 nvme0n1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.399 02:26:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.399 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.658 nvme0n1 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
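Editorial note on the target-side setup: the nvmet_auth_set_key calls traced above record only their arguments and a series of bare echo commands, and xtrace does not show where that output is redirected. A minimal sketch of what such a helper plausibly does, assuming the echoes land in the kernel nvmet configfs attributes of the allowed-host entry (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), with the hostnqn taken from the traced attach parameters:

# Illustrative reconstruction only; the real helper lives in the test script (host/auth.sh).
# Assumptions: keys[] and ckeys[] hold the DHHC-1 strings seen in the trace, and the
# target is the kernel nvmet driver configured under /sys/kernel/config/nvmet.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}

        echo "hmac($digest)" > "$host_cfs/dhchap_hash"    # e.g. hmac(sha512)
        echo "$dhgroup" > "$host_cfs/dhchap_dhgroup"      # e.g. ffdhe4096
        echo "$key" > "$host_cfs/dhchap_key"              # host secret, DHHC-1:xx:...:
        # keyid 4 carries no controller key, so bidirectional auth is skipped there.
        [[ -z $ckey ]] || echo "$ckey" > "$host_cfs/dhchap_ctrl_key"
}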
00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.658 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.659 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.659 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.659 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.659 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.917 nvme0n1 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.917 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.176 nvme0n1 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.176 02:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.176 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.434 nvme0n1 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:00.434 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
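The host side of each iteration is spelled out by the traced RPCs: connect_authenticate narrows the initiator to one digest and one DH group, attaches a controller with the matching DH-HMAC-CHAP key, and the caller then checks bdev_nvme_get_controllers and detaches. A condensed sketch using only the commands that appear in the trace (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; key names key0..key4 and ckey0..ckey3 are assumed to be registered before this loop runs):

# Condensed from the traced connect_authenticate and verification calls; error handling omitted.
connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Allow exactly one digest and one DH group for this attempt.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach to the target at 10.0.0.1:4420; the controller key is optional (absent for keyid 4).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
}

# Verification and teardown, as traced at host/auth.sh@64-65:
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0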
00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.435 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.693 nvme0n1 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.693 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
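Each "nvme0n1" block in the log is one pass of the same driver loop, and the host/auth.sh@101-104 trace markers give its shape directly. Reconstructed for reference, assuming dhgroups holds the groups exercised in this part of the run:

# Driver loop implied by the host/auth.sh@101-104 markers in the trace above.
digest=sha512
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt

for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # provision the target
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # authenticate from the host
        done
done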
00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.951 02:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 nvme0n1 00:26:01.209 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.209 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.209 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.209 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.210 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.469 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.726 nvme0n1 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.726 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.983 02:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.241 nvme0n1 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.241 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.805 nvme0n1 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.805 02:26:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkYjNiYWE0NzNmOWM4MzNkNDM0OTBmYTNkMzNiN2SlwPKl: 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJkOWEyODBhM2M2MGQ0ZjBlZWM0MmYyODMxMDUyYWVjZDYzZmUzYmVmMjcxYjgwMWQyZjhjODAxYTU5M2Q4Mel5UjA=: 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.805 02:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 nvme0n1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.740 02:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.305 nvme0n1 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.305 02:26:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE0YmQ0MzA4NmNhYjg0NjBlZDAzZGFlYTFhNzQwZmaE186F: 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4OTUxOTJkYjgxMDg4MGYwODFlMDUyNGNjN2VhOTWuAMh0: 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.305 02:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.238 nvme0n1 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.238 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjgyMzgyODNhOWQ1ZmFkNjQ5NGJmN2Y3MGIxNGMyN2E4NTBkYTc3N2IxMTJiMzc05wTE/g==: 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODUwZTBhZWVhYmQ2MGIwZTJkNmUyNmM5Njk1NjIxNGNw/ZV2: 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:05.239 02:26:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.239 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 nvme0n1 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU3MjViYmFkYjc0YmUzNmZjODMxNmEyM2NjMjczODdlNTM2MGQ3OTI3NDliNjU5OWRhZjFiZDg3ODRhZTE0ODyjYEI=: 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.175 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.176 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.176 02:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.176 02:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.176 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:06.176 02:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.748 nvme0n1 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.748 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyZjVlNGIzYTU0MzZlMjhhZmQ4ZWNkYzk1MjEwY2QyNTNhZmUzM2ZiOWNmNWYxVWIG0w==: 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjZWE4MjJkYmI3N2IwOTM5NTg1ZmZiOWU4MzFlMmYyNjVmOWVkOGVjNDA4MGFmkbJk8w==: 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.749 
02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.749 2024/05/15 02:26:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:06.749 request: 00:26:06.749 { 00:26:06.749 "method": "bdev_nvme_attach_controller", 00:26:06.749 "params": { 00:26:06.749 "name": "nvme0", 00:26:06.749 "trtype": "tcp", 00:26:06.749 "traddr": "10.0.0.1", 00:26:06.749 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.749 "adrfam": "ipv4", 00:26:06.749 "trsvcid": "4420", 00:26:06.749 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:26:06.749 } 00:26:06.749 } 00:26:06.749 Got JSON-RPC error response 00:26:06.749 GoRPCClient: error on JSON-RPC call 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.749 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.006 2024/05/15 02:26:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:07.006 request: 00:26:07.006 { 00:26:07.006 "method": "bdev_nvme_attach_controller", 00:26:07.007 "params": { 00:26:07.007 "name": "nvme0", 00:26:07.007 "trtype": "tcp", 00:26:07.007 "traddr": "10.0.0.1", 00:26:07.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:07.007 "adrfam": "ipv4", 00:26:07.007 "trsvcid": "4420", 00:26:07.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:07.007 "dhchap_key": "key2" 00:26:07.007 } 00:26:07.007 } 
00:26:07.007 Got JSON-RPC error response 00:26:07.007 GoRPCClient: error on JSON-RPC call 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.007 2024/05/15 02:26:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:07.007 request: 00:26:07.007 { 00:26:07.007 "method": "bdev_nvme_attach_controller", 00:26:07.007 "params": { 00:26:07.007 "name": "nvme0", 00:26:07.007 "trtype": "tcp", 00:26:07.007 "traddr": "10.0.0.1", 00:26:07.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:07.007 "adrfam": "ipv4", 00:26:07.007 "trsvcid": "4420", 00:26:07.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:07.007 "dhchap_key": "key1", 00:26:07.007 "dhchap_ctrlr_key": "ckey2" 00:26:07.007 } 00:26:07.007 } 00:26:07.007 Got JSON-RPC error response 00:26:07.007 GoRPCClient: error on JSON-RPC call 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:07.007 rmmod nvme_tcp 00:26:07.007 rmmod nvme_fabrics 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 85866 ']' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 85866 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 85866 ']' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 85866 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85866 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:07.007 killing process with pid 85866 00:26:07.007 02:26:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85866' 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 85866 00:26:07.007 02:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 85866 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:07.266 02:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:07.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:08.089 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:08.089 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:08.089 02:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PVI /tmp/spdk.key-null.s0E /tmp/spdk.key-sha256.6d6 /tmp/spdk.key-sha384.fDa /tmp/spdk.key-sha512.EbN /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:08.089 02:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:08.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:26:08.348 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:08.348 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:08.348 00:26:08.348 real 0m35.759s 00:26:08.348 user 0m31.765s 00:26:08.348 sys 0m3.176s 00:26:08.348 02:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:08.348 ************************************ 00:26:08.348 END TEST nvmf_auth_host 00:26:08.348 ************************************ 00:26:08.348 02:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.348 02:26:56 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:08.348 02:26:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:08.348 02:26:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:08.348 02:26:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:08.348 02:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:08.348 ************************************ 00:26:08.348 START TEST nvmf_digest 00:26:08.348 ************************************ 00:26:08.348 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:08.605 * Looking for test storage... 00:26:08.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.605 02:26:56 
nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.605 02:26:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:08.606 Cannot find device "nvmf_tgt_br" 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:08.606 Cannot find device "nvmf_tgt_br2" 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:08.606 Cannot find device "nvmf_tgt_br" 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:08.606 Cannot find device "nvmf_tgt_br2" 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:08.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:08.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:08.606 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:08.864 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:08.864 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set 
nvmf_tgt_br master nvmf_br 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:08.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:26:08.865 00:26:08.865 --- 10.0.0.2 ping statistics --- 00:26:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.865 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:08.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:08.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:26:08.865 00:26:08.865 --- 10.0.0.3 ping statistics --- 00:26:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.865 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:08.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:08.865 00:26:08.865 --- 10.0.0.1 ping statistics --- 00:26:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.865 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:08.865 ************************************ 00:26:08.865 START TEST nvmf_digest_clean 00:26:08.865 ************************************ 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == 
\d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=87240 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 87240 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87240 ']' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:08.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:08.865 02:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.865 [2024-05-15 02:26:56.849351] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:08.865 [2024-05-15 02:26:56.849494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.122 [2024-05-15 02:26:56.991176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.123 [2024-05-15 02:26:57.072519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.123 [2024-05-15 02:26:57.072586] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.123 [2024-05-15 02:26:57.072606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.123 [2024-05-15 02:26:57.072618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.123 [2024-05-15 02:26:57.072627] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:09.123 [2024-05-15 02:26:57.072657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.053 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:10.054 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:10.054 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:10.054 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 02:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 null0 00:26:10.054 [2024-05-15 02:26:57.991097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.054 [2024-05-15 02:26:58.015030] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:10.054 [2024-05-15 02:26:58.015288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87284 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87284 /var/tmp/bperf.sock 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87284 ']' 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:10.054 
02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:10.054 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.311 [2024-05-15 02:26:58.083340] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:10.311 [2024-05-15 02:26:58.083476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87284 ] 00:26:10.311 [2024-05-15 02:26:58.251632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.567 [2024-05-15 02:26:58.336161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.567 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.567 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:10.567 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:10.567 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:10.567 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.826 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.826 02:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.389 nvme0n1 00:26:11.389 02:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:11.389 02:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:11.646 Running I/O for 2 seconds... 
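For readability, here is the shape of one run_bperf iteration (randread, 4 KiB blocks, queue depth 128, DSA disabled), condensed from the commands traced above; the latency table that follows is the output of the perform_tests step. This is only a sketch of the traced commands, with the harness's bookkeeping and error handling left out.

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf paused (-z --wait-for-rpc) on its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish framework init, then attach the target with data digest enabled (--ddgst).
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Drive I/O for the configured 2 seconds.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests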
00:26:13.544 00:26:13.544 Latency(us) 00:26:13.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.544 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:13.544 nvme0n1 : 2.00 17700.67 69.14 0.00 0.00 7223.07 3470.43 19422.49 00:26:13.544 =================================================================================================================== 00:26:13.544 Total : 17700.67 69.14 0.00 0.00 7223.07 3470.43 19422.49 00:26:13.544 0 00:26:13.544 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:13.544 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:13.544 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:13.544 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:13.544 | select(.opcode=="crc32c") 00:26:13.544 | "\(.module_name) \(.executed)"' 00:26:13.544 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:13.802 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87284 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87284 ']' 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87284 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87284 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:13.803 killing process with pid 87284 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87284' 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87284 00:26:13.803 Received shutdown signal, test time was about 2.000000 seconds 00:26:13.803 00:26:13.803 Latency(us) 00:26:13.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.803 =================================================================================================================== 00:26:13.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.803 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87284 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87338 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87338 /var/tmp/bperf.sock 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87338 ']' 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:14.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:14.061 02:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.061 [2024-05-15 02:27:02.058594] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:14.061 [2024-05-15 02:27:02.058710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87338 ] 00:26:14.061 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.061 Zero copy mechanism will not be used. 
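The 128 KiB case above reuses the same bdevperf binary; only the workload arguments change, and the "zero copy threshold" notice is expected because the 131072-byte I/O size exceeds the 65536-byte limit it reports. As echoed by xtrace, the launch (backgrounded by the harness, which records the pid as bperfpid) is roughly:

    # second digest_clean case: randread, 128 KiB I/Os, queue depth 16, paused until RPC init
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &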
00:26:14.318 [2024-05-15 02:27:02.198982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.318 [2024-05-15 02:27:02.271456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.576 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:14.576 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:14.576 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.576 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.576 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.850 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.850 02:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.435 nvme0n1 00:26:15.436 02:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.436 02:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.436 Zero copy mechanism will not be used. 00:26:15.436 Running I/O for 2 seconds... 00:26:17.959 00:26:17.959 Latency(us) 00:26:17.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:17.959 nvme0n1 : 2.00 7360.09 920.01 0.00 0.00 2169.50 618.12 8340.95 00:26:17.959 =================================================================================================================== 00:26:17.959 Total : 7360.09 920.01 0.00 0.00 2169.50 618.12 8340.95 00:26:17.959 0 00:26:17.959 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.959 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.959 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.960 | select(.opcode=="crc32c") 00:26:17.960 | "\(.module_name) \(.executed)"' 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87338 00:26:17.960 02:27:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87338 ']' 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87338 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87338 00:26:17.960 killing process with pid 87338 00:26:17.960 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.960 00:26:17.960 Latency(us) 00:26:17.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.960 =================================================================================================================== 00:26:17.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87338' 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87338 00:26:17.960 02:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87338 00:26:18.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87397 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87397 /var/tmp/bperf.sock 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87397 ']' 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
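Each case ends with the killprocess helper seen just above for pid 87338: it checks that the pid still belongs to a live reactor process, sends it a signal, and waits for it to exit so the next bdevperf instance can reuse /var/tmp/bperf.sock. Stripped of the xtrace decoration, that step amounts to roughly:

    pid=87338                          # bperfpid recorded when the run was launched
    kill -0 "$pid"                     # still alive?
    ps --no-headers -o comm= "$pid"    # reported as reactor_1 in this trace
    echo "killing process with pid $pid"
    kill "$pid"                        # produces the 'Received shutdown signal' notice above
    wait "$pid"                        # returns once bdevperf has shut down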
00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:18.217 02:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.217 [2024-05-15 02:27:06.053051] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:18.217 [2024-05-15 02:27:06.053320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87397 ] 00:26:18.217 [2024-05-15 02:27:06.185872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.474 [2024-05-15 02:27:06.245864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.038 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:19.038 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:19.038 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:19.038 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:19.038 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.602 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.602 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.860 nvme0n1 00:26:19.860 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.860 02:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.117 Running I/O for 2 seconds... 
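The get_accel_stats step, repeated after every run above and again after the randwrite run now in flight, pulls the accel framework counters over the same socket and keeps only the crc32c opcode; digest_clean then requires a non-zero executed count and the "software" module, since DSA is disabled (scan_dsa=false) in all of these cases. The check reduces to roughly:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: software <non-zero count>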
00:26:22.011 00:26:22.011 Latency(us) 00:26:22.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:22.011 nvme0n1 : 2.01 20943.70 81.81 0.00 0.00 6101.70 2532.07 14894.55 00:26:22.011 =================================================================================================================== 00:26:22.011 Total : 20943.70 81.81 0.00 0.00 6101.70 2532.07 14894.55 00:26:22.011 0 00:26:22.011 02:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:22.011 02:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:22.011 02:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:22.011 02:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:22.011 02:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:22.011 | select(.opcode=="crc32c") 00:26:22.011 | "\(.module_name) \(.executed)"' 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87397 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87397 ']' 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87397 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:22.268 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87397 00:26:22.526 killing process with pid 87397 00:26:22.526 Received shutdown signal, test time was about 2.000000 seconds 00:26:22.526 00:26:22.526 Latency(us) 00:26:22.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.526 =================================================================================================================== 00:26:22.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87397' 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87397 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87397 00:26:22.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
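For reading these result tables: the MiB/s column is just IOPS multiplied by the I/O size. Checked against the two 4 KiB runs above:

    20943.70 IOPS x 4096 B = 85,785,395 B/s ≈ 81.81 MiB/s   (randwrite, qd 128)
    17700.67 IOPS x 4096 B = 72,501,944 B/s ≈ 69.14 MiB/s   (randread, qd 128)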
00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87458 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87458 /var/tmp/bperf.sock 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87458 ']' 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:22.526 02:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:22.526 [2024-05-15 02:27:10.514990] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:22.526 [2024-05-15 02:27:10.515314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87458 ] 00:26:22.526 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.526 Zero copy mechanism will not be used. 
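This randwrite 128 KiB run is the last of the four digest_clean combinations driven in this log (the 4 KiB randread case opened the stretch, followed by run_bperf randread 131072 16, randwrite 4096 128 and randwrite 131072 16, all with scan_dsa=false). Conceptually the sweep is just the following loop over run_bperf, the helper defined in host/digest.sh:

    # the four digest_clean cases exercised above (software crc32c, scan_dsa=false throughout)
    for args in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $args false          # run_bperf <rw> <bs> <qd> <scan_dsa>
    done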
00:26:22.801 [2024-05-15 02:27:10.646678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.801 [2024-05-15 02:27:10.735306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.739 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:23.739 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:23.739 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:23.739 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:23.739 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:23.997 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.997 02:27:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.350 nvme0n1 00:26:24.350 02:27:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:24.350 02:27:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.350 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.350 Zero copy mechanism will not be used. 00:26:24.350 Running I/O for 2 seconds... 00:26:26.879 00:26:26.879 Latency(us) 00:26:26.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.879 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:26.879 nvme0n1 : 2.00 5523.91 690.49 0.00 0.00 2887.76 2040.55 6047.19 00:26:26.879 =================================================================================================================== 00:26:26.879 Total : 5523.91 690.49 0.00 0.00 2887.76 2040.55 6047.19 00:26:26.879 0 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:26.879 | select(.opcode=="crc32c") 00:26:26.879 | "\(.module_name) \(.executed)"' 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87458 00:26:26.879 02:27:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87458 ']' 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87458 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87458 00:26:26.879 killing process with pid 87458 00:26:26.879 Received shutdown signal, test time was about 2.000000 seconds 00:26:26.879 00:26:26.879 Latency(us) 00:26:26.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.879 =================================================================================================================== 00:26:26.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87458' 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87458 00:26:26.879 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87458 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 87240 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87240 ']' 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87240 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87240 00:26:27.138 killing process with pid 87240 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87240' 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87240 00:26:27.138 [2024-05-15 02:27:14.944489] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:27.138 02:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87240 00:26:27.138 ************************************ 00:26:27.138 END TEST nvmf_digest_clean 00:26:27.138 ************************************ 00:26:27.138 00:26:27.138 real 0m18.370s 00:26:27.138 user 0m35.994s 00:26:27.138 sys 0m4.473s 00:26:27.138 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:27.138 02:27:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.396 ************************************ 00:26:27.396 START TEST nvmf_digest_error 00:26:27.396 ************************************ 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=87547 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 87547 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87547 ']' 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:27.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:27.396 02:27:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.396 [2024-05-15 02:27:15.264668] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:27.396 [2024-05-15 02:27:15.265050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.396 [2024-05-15 02:27:15.407062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.653 [2024-05-15 02:27:15.492249] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.653 [2024-05-15 02:27:15.492331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.653 [2024-05-15 02:27:15.492354] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.653 [2024-05-15 02:27:15.492371] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:27.653 [2024-05-15 02:27:15.492412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.653 [2024-05-15 02:27:15.492464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 [2024-05-15 02:27:16.361091] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:28.586 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 null0 00:26:28.587 [2024-05-15 02:27:16.432013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.587 [2024-05-15 02:27:16.455957] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:28.587 [2024-05-15 02:27:16.456225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87585 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87585 /var/tmp/bperf.sock 00:26:28.587 
02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87585 ']' 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.587 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 [2024-05-15 02:27:16.521198] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:28.587 [2024-05-15 02:27:16.521641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87585 ] 00:26:28.845 [2024-05-15 02:27:16.661023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.845 [2024-05-15 02:27:16.731591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.845 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:28.845 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:28.845 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.845 02:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.421 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.421 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.421 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.421 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.422 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.422 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.679 nvme0n1 00:26:29.679 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.679 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.679 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.679 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.679 02:27:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.679 02:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.937 Running I/O for 2 seconds... 00:26:29.937 [2024-05-15 02:27:17.760575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.760648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.760665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.775938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.776004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.776021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.790808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.790876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.790892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.805931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.806005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.806022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.818508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.818582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.818598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.833534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.833614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.833631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.849180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.849250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.849267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.863858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.863926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.863949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.878742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.937 [2024-05-15 02:27:17.878817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.937 [2024-05-15 02:27:17.878833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.937 [2024-05-15 02:27:17.894047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.938 [2024-05-15 02:27:17.894126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.938 [2024-05-15 02:27:17.894142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.938 [2024-05-15 02:27:17.908245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.938 [2024-05-15 02:27:17.908316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.938 [2024-05-15 02:27:17.908332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.938 [2024-05-15 02:27:17.923059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.938 [2024-05-15 02:27:17.923134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.938 [2024-05-15 02:27:17.923152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.938 [2024-05-15 02:27:17.938052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:29.938 [2024-05-15 02:27:17.938127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.938 [2024-05-15 02:27:17.938144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:17.953162] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:17.953236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:17.953253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:17.966651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:17.966723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:17.966739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:17.979859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:17.979930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:17.979946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:17.995651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:17.995720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:17.995736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:18.010643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:18.010717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:18.010733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:18.024831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:18.024906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:18.024923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:18.038891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:18.038965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:18.038981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:30.195 [2024-05-15 02:27:18.052057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.195 [2024-05-15 02:27:18.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.195 [2024-05-15 02:27:18.052148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.195 [2024-05-15 02:27:18.067528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.067601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.067617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.082412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.082478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.082494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.097213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.097282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.097298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.112327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.112407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.112425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.124980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.125048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.125065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.141462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.141538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.141555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.156604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.156696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.168940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.169018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.169035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.184974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.185049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.185065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.196 [2024-05-15 02:27:18.197825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.196 [2024-05-15 02:27:18.197891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.196 [2024-05-15 02:27:18.197907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.212683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.212758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.212781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.227515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.227606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.241735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.241834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.241852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.254155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.254230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.254247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.269708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.269796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.269814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.286304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.286380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.286410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.298558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.298624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.298639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.311629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.311691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.311707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.327672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.327741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.327757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.342214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.342282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
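The stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines here is the intended result of the digest_error setup traced earlier: crc32c was routed through the error accel module and armed to corrupt 256 operations, the connection was attached with --ddgst, and --bdev-retry-count -1 makes the initiator retry rather than fail the job, which is why the 2-second run keeps going while these errors accumulate. In the order they appear above (bperf_rpc goes to /var/tmp/bperf.sock as shown at digest.sh@18; rpc_cmd presumably goes to the nvmf target application started earlier), the setup was roughly:

    rpc_cmd accel_assign_opc -o crc32c -m error                                # crc32c -> error-injection module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # retry failed I/O indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                      # injection off while connecting
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                 # data digest enabled on the connection
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256               # corrupt the next 256 crc32c ops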
00:26:30.454 [2024-05-15 02:27:18.342304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.356614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.356694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.371055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.371119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.371135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.385030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.385093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.385109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.401664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.401729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.401745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.414602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.414664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.414679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.428603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.428665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.428681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.443117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.443181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.443197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.454 [2024-05-15 02:27:18.458685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.454 [2024-05-15 02:27:18.458758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.454 [2024-05-15 02:27:18.458774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.473575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.473646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.473662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.487772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.487839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.487855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.502495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.502555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.502571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.518197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.518263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.518279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.530417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.530478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.530494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.545786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.545861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.545878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.560231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.560304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.560320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.575275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.575351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.575368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.590463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.590525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.590542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.712 [2024-05-15 02:27:18.605986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.712 [2024-05-15 02:27:18.606065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.712 [2024-05-15 02:27:18.606092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.620893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.620986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.621012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.637122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.637223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.637250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.653042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 
00:26:30.713 [2024-05-15 02:27:18.653132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.653162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.668781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.668867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.668895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.684489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.684577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.700244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.700347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.713 [2024-05-15 02:27:18.716180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.713 [2024-05-15 02:27:18.716264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.713 [2024-05-15 02:27:18.716290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.732149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.732231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.747338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.747437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.747463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.762342] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.762434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.777973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.778070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.778096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.793695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.793803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.793827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.809302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.809428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.809455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.825300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.825402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.825427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.841045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.841138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.841162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.856792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.856889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.856913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.872226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.872320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.872345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.888337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.888449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.888474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.904112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.904193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.904216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.918687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.918753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.918769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.933312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.933396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.933415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.947641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.947706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.947722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.962759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.962824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.962840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.972 [2024-05-15 02:27:18.976546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:30.972 [2024-05-15 02:27:18.976610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.972 [2024-05-15 02:27:18.976627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:18.989621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:18.989690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:18.989706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.005113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.005203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.005222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.018440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.018515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.018531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.034093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.034167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.034183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.048664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.048733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.048749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.064873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.064955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.064972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.082972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.083056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.083073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.099239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.099322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.099339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.117269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.117354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.117371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.134773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.134851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.149928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.150005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.150022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.170404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.170497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.170514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.188865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.188961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.231 [2024-05-15 02:27:19.188979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.204735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.204824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.204842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.224837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.224951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.224981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.231 [2024-05-15 02:27:19.243044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.231 [2024-05-15 02:27:19.243128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.231 [2024-05-15 02:27:19.243146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.489 [2024-05-15 02:27:19.257185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.489 [2024-05-15 02:27:19.257264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.489 [2024-05-15 02:27:19.257281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.489 [2024-05-15 02:27:19.272662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.489 [2024-05-15 02:27:19.272738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.489 [2024-05-15 02:27:19.272755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.489 [2024-05-15 02:27:19.287117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.489 [2024-05-15 02:27:19.287200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.489 [2024-05-15 02:27:19.287216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.489 [2024-05-15 02:27:19.301611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.489 [2024-05-15 02:27:19.301685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.489 [2024-05-15 02:27:19.301701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.489 [2024-05-15 02:27:19.315467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.489 [2024-05-15 02:27:19.315540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.315556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.331315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.331443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.331472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.345935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.346009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.346026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.359130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.359200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.359217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.374529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.374599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.374616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.389089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.389159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.389175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.403556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.403624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.403640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.418217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.418294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.418310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.433199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.433298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.433324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.448899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.448976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.448993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.463884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.463957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.478769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.478842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.478859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.490 [2024-05-15 02:27:19.492531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.490 [2024-05-15 02:27:19.492603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.490 [2024-05-15 02:27:19.492619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.507752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 
00:26:31.748 [2024-05-15 02:27:19.507824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.507840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.522406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.522476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.522492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.537856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.537925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.537942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.552620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.552692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.552709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.565490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.565558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.565574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.581271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.581345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.581362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.748 [2024-05-15 02:27:19.593449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.748 [2024-05-15 02:27:19.593521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.748 [2024-05-15 02:27:19.593536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.608751] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.608830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.608846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.624266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.624336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.624352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.641731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.641821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.641837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.657066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.657135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.657152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.669688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.669756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.669784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.684842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.684910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.684926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.749 [2024-05-15 02:27:19.699017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0) 00:26:31.749 [2024-05-15 02:27:19.699084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.749 [2024-05-15 02:27:19.699100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:26:31.749 [2024-05-15 02:27:19.717522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0)
00:26:31.749 [2024-05-15 02:27:19.717636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.749 [2024-05-15 02:27:19.717663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:31.749 [2024-05-15 02:27:19.737050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fea9d0)
00:26:31.749 [2024-05-15 02:27:19.737156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.749 [2024-05-15 02:27:19.737179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:31.749
00:26:31.749 Latency(us)
00:26:31.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.749 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:31.749 nvme0n1 : 2.01 16891.02 65.98 0.00 0.00 7565.99 3813.00 22163.08
00:26:31.749 ===================================================================================================================
00:26:31.749 Total : 16891.02 65.98 0.00 0.00 7565.99 3813.00 22163.08
00:26:31.749 0
00:26:32.007 02:27:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:32.007 02:27:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:32.007 02:27:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:32.007 | .driver_specific
00:26:32.007 | .nvme_error
00:26:32.007 | .status_code
00:26:32.007 | .command_transient_transport_error'
00:26:32.007 02:27:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87585
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87585 ']'
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87585
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87585
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
killing process with pid 87585
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87585'
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87585
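The get_transient_errcount/bperf_rpc trace above is the checkpoint of this test case: host/digest.sh asks the bdevperf instance for its per-bdev I/O statistics over the RPC socket and pulls the command_transient_transport_error counter out of the NVMe error stats, then requires it to be non-zero (132 in this run) before tearing bdevperf down. Below is a minimal stand-alone sketch of that query, assuming the same SPDK checkout under /home/vagrant/spdk_repo/spdk and a bdevperf instance still listening on /var/tmp/bperf.sock; it is an illustrative re-implementation, not the verbatim host/digest.sh helper.

  # Illustrative sketch: read bdevperf's per-bdev I/O statistics over its RPC socket and
  # extract the count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  # The test then insists that at least one injected digest error was counted
  # (this run reported 132) before killing the bdevperf process.
  (( $(get_transient_errcount nvme0n1) > 0 ))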
Received shutdown signal, test time was about 2.000000 seconds
00:26:32.264
00:26:32.264 Latency(us)
00:26:32.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:32.264 ===================================================================================================================
00:26:32.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:32.264 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87585
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87638
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87638 /var/tmp/bperf.sock
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87638 ']'
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:26:32.521 02:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.521 [2024-05-15 02:27:20.464087] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization...
00:26:32.521 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:32.521 Zero copy mechanism will not be used.
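The run_bperf_err randread 131072 16 case traced above starts a fresh bdevperf in idle mode and only triggers the workload later over RPC; the DPDK/EAL startup output of that new process continues just below. Here is a sketch of the launch with the flags spelled out; the flag meanings are read from SPDK's bdevperf usage text rather than from this log, so treat them as an interpretation.

  # Sketch of the bdevperf launch traced above. Flag meanings (interpretation, not from the log):
  #   -m 2        core mask 0x2 (run the reactor on core 1, matching "Reactor started on core 1")
  #   -r PATH     UNIX-domain RPC socket this instance listens on (/var/tmp/bperf.sock here)
  #   -w randread workload type
  #   -o 131072   I/O size in bytes (128 KiB)
  #   -t 2        run time in seconds
  #   -q 16       queue depth
  #   -z          start idle and wait for a perform_tests RPC before issuing I/O
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!   # host/digest.sh records this as bperfpid (87638 in this run)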
00:26:32.521 [2024-05-15 02:27:20.464228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87638 ] 00:26:32.778 [2024-05-15 02:27:20.611967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.778 [2024-05-15 02:27:20.703302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.709 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.709 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:33.709 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:33.709 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.966 02:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.224 nvme0n1 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:34.224 02:27:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:34.483 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:34.483 Zero copy mechanism will not be used. 00:26:34.483 Running I/O for 2 seconds... 
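Once waitforlisten sees the RPC socket, the trace above configures the new bdevperf instance: NVMe error statistics plus retries on the bdev, a data-digest-enabled attach to the TCP target, crc32c error injection, and finally the perform_tests kick that starts the 2-second run whose digest errors follow below. A condensed sketch of that sequence is given here; bperf_rpc and bperf_py are reconstructed from the host/digest.sh@18/@19 expansions visible in this log, while rpc_cmd is the autotest_common.sh helper whose target socket is not visible in the trace, and the injection arguments are taken verbatim rather than interpreted.

  # Sketch of the setup sequence traced above (helper names as they appear in the trace).
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_py()  { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }

  # Keep per-status-code NVMe error statistics and retry failed I/O (-1 retry count), so the
  # injected digest errors surface as transient-error counters instead of failed jobs.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Start with crc32c error injection disabled, attach the target with data digest enabled
  # (--ddgst), then re-arm the injection in corrupt mode (-i 32 copied verbatim from the trace).
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick the idle (-z) bdevperf instance into its 2-second randread run.
  bperf_py perform_tests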
00:26:34.483 [2024-05-15 02:27:22.354970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.355045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.355062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.360568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.360625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.360640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.365127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.365182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.365197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.369368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.369432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.369447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.373514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.373567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.373582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.379102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.379168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.379183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.483 [2024-05-15 02:27:22.382663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:34.483 [2024-05-15 02:27:22.382716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.483 [2024-05-15 02:27:22.382730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:34.483 – 00:26:35.266 [repeated entries from 2024-05-15 02:27:22.387 to 02:27:23.103; each iteration logs the same three-line pattern on the same qpair, with only cid, lba, and sqhd varying:
  nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0)
  nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:<sqhd> p:0 m:0 dnr:0]
[2024-05-15 02:27:23.103861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.103912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.103926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.108505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.108550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.108564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.113972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.114021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.114036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.118299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.118348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.118363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.123349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.123412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.123429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.128314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.128362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.128376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.134278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.134326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.134341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.138519] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.138567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.138582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.143299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.143349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.143364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.148696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.148749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.148765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.154515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.154568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.154582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.159339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.159399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.159415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.164793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.164838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.164853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.169409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.169454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.169469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:35.266 [2024-05-15 02:27:23.174571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.174624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.174638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.179977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.180025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.180041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.185419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.185466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.185480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.190474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.190519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.190534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.196132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.196181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.196196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.266 [2024-05-15 02:27:23.201059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.266 [2024-05-15 02:27:23.201104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.266 [2024-05-15 02:27:23.201117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.205735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.205788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.205804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.211365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.211429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.211444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.216325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.216411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.222335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.222408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.222425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.227381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.227449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.227465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.232276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.232331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.232346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.236851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.236898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.236913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.242874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.242925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.242940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.247403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.247449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.247464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.252027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.252086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.252108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.257236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.257285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.257299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.263163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.263212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.263226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.268528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.268576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.268591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.273636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.273685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.273699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.267 [2024-05-15 02:27:23.279216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.267 [2024-05-15 02:27:23.279264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.267 [2024-05-15 02:27:23.279279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.284123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.530 [2024-05-15 02:27:23.284172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.530 [2024-05-15 02:27:23.284187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.288876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.530 [2024-05-15 02:27:23.288923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.530 [2024-05-15 02:27:23.288938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.294951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.530 [2024-05-15 02:27:23.295000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.530 [2024-05-15 02:27:23.295015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.300128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.530 [2024-05-15 02:27:23.300185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.530 [2024-05-15 02:27:23.300200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.305404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.530 [2024-05-15 02:27:23.305450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.530 [2024-05-15 02:27:23.305464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.530 [2024-05-15 02:27:23.310956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.311006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.311020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.316084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.316143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 
02:27:23.316158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.321886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.321931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.321945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.326822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.326870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.326885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.331824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.331869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.331883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.336798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.336846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.336861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.342773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.342828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.531 [2024-05-15 02:27:23.342843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.531 [2024-05-15 02:27:23.347437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.531 [2024-05-15 02:27:23.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.347504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.353186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.353257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.357886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.357940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.357955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.363217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.363272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.363287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.368678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.368730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.368745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.373914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.373963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.373978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.379096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.379145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.379160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.384646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.384693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.384708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.389458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.532 [2024-05-15 02:27:23.389507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.532 [2024-05-15 02:27:23.389521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.532 [2024-05-15 02:27:23.394561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.394611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.394625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.399904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.399955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.399970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.405830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.405886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.405901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.410752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.410801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.410816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.416523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.416576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.416591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.420811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.420860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.420874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.425210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.425268] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.425283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.430219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.533 [2024-05-15 02:27:23.430275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.533 [2024-05-15 02:27:23.430290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.533 [2024-05-15 02:27:23.434131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.434183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.439325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.439379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.439409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.444776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.444835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.444850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.449701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.449757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.449771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.452943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.452993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.453008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.457716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.457769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.462837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.462890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.462906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.467142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.467192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.467207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.471609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.471673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.476287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.476345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.476360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.481230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.534 [2024-05-15 02:27:23.481296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.534 [2024-05-15 02:27:23.481310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.534 [2024-05-15 02:27:23.486151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.486207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.486221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.490279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 
00:26:35.535 [2024-05-15 02:27:23.490333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.490347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.495814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.495882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.495897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.500781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.500838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.500853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.504125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.504174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.504190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.508522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.508573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.508588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.514056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.514135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.514159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.519987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.520063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.520087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.524503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.524581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.524611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.530751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.530835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.535 [2024-05-15 02:27:23.530861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.535 [2024-05-15 02:27:23.536575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.535 [2024-05-15 02:27:23.536669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.536 [2024-05-15 02:27:23.536698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.536 [2024-05-15 02:27:23.541896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.536 [2024-05-15 02:27:23.541966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.536 [2024-05-15 02:27:23.541981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.546879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.546937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.546952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.552881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.552944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.552960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.557308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.557367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.557411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.561155] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.561210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.561240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.566048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.566125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.566161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.571530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.571597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.571622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.576966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.577037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.577062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.580327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.580400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.580425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.585468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.585542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.590672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.590738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.590761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
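The repeated pattern above — nvme_tcp_accel_seq_recv_compute_crc32_done reporting "data digest error" on the same tqpair, immediately followed by the affected READ being completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — appears to be the host-side CRC-32C check that NVMe/TCP applies to the data digest (DDGST) of each incoming data PDU: a mismatch is flagged per command while the connection itself stays up and the workload keeps issuing I/O, which matches the steady cadence of these records. The sketch below is only a minimal illustration of that kind of digest check, not SPDK's implementation; the function name verify_ddgst and its parameters are hypothetical.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78),
     * init 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical receive-side check: recompute the digest over the PDU payload
     * and compare it with the DDGST value carried on the wire. A mismatch is what
     * the log reports as a data digest error, after which the command is completed
     * with a transient transport error (status 00/22) rather than real data. */
    static bool verify_ddgst(const uint8_t *payload, size_t len, uint32_t ddgst_from_wire)
    {
        return crc32c(payload, len) == ddgst_from_wire;
    }
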
00:26:35.799 [2024-05-15 02:27:23.596295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.596382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.596425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.599938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.600024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.600050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.606096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.606184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.606209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.609759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.609859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.609883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.614261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.614325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.614350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.618995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.619054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.619077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.799 [2024-05-15 02:27:23.622846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.799 [2024-05-15 02:27:23.622909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.799 [2024-05-15 02:27:23.622933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.627093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.627178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.627202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.630843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.630920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.630943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.635637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.635730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.635754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.639777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.639860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.639883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.645021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.645106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.645130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.650785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.650876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.650901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.655812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.655898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.655922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.660152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.660245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.664480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.664559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.664594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.669764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.669864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.669888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.674812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.674905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.674930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.681208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.681299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.681322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.686623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.686714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.686740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.691125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.691191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.691216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.694432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.694486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.694510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.699158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.699223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.699249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.703772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.703828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.703851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.708184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.708251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.713307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.713405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.713434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.717754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.717826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.717843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.722890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.722969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 
[2024-05-15 02:27:23.722984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.727675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.727755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.727779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.731542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.731611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.731626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.736323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.736378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.736411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.740888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.740941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.740956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.800 [2024-05-15 02:27:23.745632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.800 [2024-05-15 02:27:23.745684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.800 [2024-05-15 02:27:23.745699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.749267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.749317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.749332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.754415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.754476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.754491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.759353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.759422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.759438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.763011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.763060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.763075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.767683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.767766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.767783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.772310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.772410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.772428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.776979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.777050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.777066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.780785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.780853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.785724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.785824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.790596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.790663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.790678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.794667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.794732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.794748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.799901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.799967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.799982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.805362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.805441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.805457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.801 [2024-05-15 02:27:23.810762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:35.801 [2024-05-15 02:27:23.810832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.801 [2024-05-15 02:27:23.810847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.814070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.814123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.814137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.818158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.818206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.818220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.823305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.823363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.823379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.828779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.828838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.828853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.833898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.833969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.833984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.836834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.836893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.059 [2024-05-15 02:27:23.836912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.059 [2024-05-15 02:27:23.843936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.059 [2024-05-15 02:27:23.844060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.844090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.850630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.850709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.850725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.855516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 
[2024-05-15 02:27:23.855593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.855609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.860237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.860332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.860362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.866433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.866519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.871867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.871930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.871946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.878592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.878675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.878698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.883134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.883218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.883240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.889234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.889342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.889364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.895082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.895146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.895161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.900958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.901021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.901037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.904728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.904783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.904799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.910105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.910165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.910180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.914487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.914544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.914559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.919675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.919727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.919742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.923090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.923137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.923151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.927364] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.927431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.927447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.931733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.931792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.931807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.936588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.936650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.936664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.940709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.940768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.940782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.945479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.945539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.945554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.950674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.950762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.950787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.956681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.956756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:36.060 [2024-05-15 02:27:23.962595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.962666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.962682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.967796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.967858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.967874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.971560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.971655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.060 [2024-05-15 02:27:23.971691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.060 [2024-05-15 02:27:23.977095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.060 [2024-05-15 02:27:23.977165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:23.977195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:23.982587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:23.982650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:23.982666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:23.988506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:23.988575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:23.988591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:23.993909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:23.993978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:23.993994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:23.998525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:23.998609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:23.998636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.003719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.003780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.003795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.009263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.009332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.009348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.012309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.012356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.012371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.017855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.017930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.017945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.023932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.024060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.029636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.029708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.029725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.033938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.034008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.034024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.038622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.038688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.038704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.045233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.045339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.045367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.048932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.049015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.049043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.055585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.055653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.055671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.060734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.060803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.060820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.065080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.065140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.065155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.070220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.070276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.070292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.061 [2024-05-15 02:27:24.074352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.061 [2024-05-15 02:27:24.074415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.061 [2024-05-15 02:27:24.074430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.078804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.078857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.078871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.083347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.083412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.083428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.087728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.087780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.087793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.091828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.091887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.091902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.096804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.096880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 
[2024-05-15 02:27:24.096906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.103461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.103523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.103539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.108869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.108926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.108942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.112570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.112641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.112665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.119759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.119844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.119865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.124931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.124991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.125007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.129101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.129161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.129176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.133848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.133907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.133923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.140690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.140795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.140821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.145346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.145441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.145471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.152662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.152762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.152787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.159267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.159343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.159360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.164751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.164859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.164884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.170308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.170415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.170444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.178300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.178430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.178460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.183930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.183997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.184014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.188542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.188602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.188624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.194196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.194259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.194275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.200203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.200266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.200283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.203793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.203840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.203856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.209533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.209592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.209607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.214224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.214279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.214294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.219468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.219523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.321 [2024-05-15 02:27:24.219539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.321 [2024-05-15 02:27:24.223632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.321 [2024-05-15 02:27:24.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.223698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.228714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.228769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.228784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.233550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.233636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.238743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.238800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.238815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.243521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.243578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.243594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.247943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 
[2024-05-15 02:27:24.248009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.248024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.253066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.253133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.253148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.258569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.258644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.258665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.264432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.264529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.264555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.270885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.270973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.270995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.276133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.276220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.276244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.281463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.281536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.281558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.287282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.287361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.287399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.292667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.292746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.292768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.299791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.299883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.299906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.305334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.305450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.305475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.310465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.310553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.310577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.316784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.316870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.316892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.322678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.322748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.322764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.326896] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.326956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.326971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.322 [2024-05-15 02:27:24.332711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.322 [2024-05-15 02:27:24.332788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.322 [2024-05-15 02:27:24.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.580 [2024-05-15 02:27:24.337268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.580 [2024-05-15 02:27:24.337337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.580 [2024-05-15 02:27:24.337353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.580 [2024-05-15 02:27:24.342565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.580 [2024-05-15 02:27:24.342627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.580 [2024-05-15 02:27:24.342643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.580 [2024-05-15 02:27:24.347578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7e18b0) 00:26:36.580 [2024-05-15 02:27:24.347635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.580 [2024-05-15 02:27:24.347651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.580 00:26:36.580 Latency(us) 00:26:36.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:36.580 nvme0n1 : 2.00 5969.47 746.18 0.00 0.00 2675.32 681.43 8519.68 00:26:36.580 =================================================================================================================== 00:26:36.580 Total : 5969.47 746.18 0.00 0.00 2675.32 681.43 8519.68 00:26:36.580 0 00:26:36.580 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:36.580 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:36.580 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:36.580 | .driver_specific 00:26:36.580 | .nvme_error 00:26:36.580 | .status_code 00:26:36.580 | .command_transient_transport_error' 00:26:36.580 02:27:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87638 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87638 ']' 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87638 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87638 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:36.838 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:36.839 killing process with pid 87638 00:26:36.839 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87638' 00:26:36.839 Received shutdown signal, test time was about 2.000000 seconds 00:26:36.839 00:26:36.839 Latency(us) 00:26:36.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.839 =================================================================================================================== 00:26:36.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.839 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87638 00:26:36.839 02:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87638 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87699 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87699 /var/tmp/bperf.sock 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87699 ']' 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.096 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 [2024-05-15 02:27:25.064951] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:37.096 [2024-05-15 02:27:25.065042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87699 ] 00:26:37.352 [2024-05-15 02:27:25.195721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.352 [2024-05-15 02:27:25.280089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.352 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:37.352 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:37.352 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:37.352 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.916 02:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.483 nvme0n1 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:38.483 02:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:38.483 Running I/O for 2 seconds... 
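[Editor's sketch, not part of the captured output.] The randwrite error pass being set up above is driven entirely over JSON-RPC: the bdevperf instance on /var/tmp/bperf.sock is configured with error counters and an attached --ddgst controller, crc32c corruption is injected via the rpc_cmd helper, and the pass/fail decision later comes from the command_transient_transport_error counter in bdev_get_iostat, extracted with the jq filter shown earlier in the trace. A minimal standalone bash sketch of that sequence follows, assuming the same paths, socket names and target address that appear in the trace; it is not the authoritative host/digest.sh.

#!/usr/bin/env bash
set -euo pipefail
# Assumes bdevperf was already started waiting in -z mode, as in the trace:
#   bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path copied from the trace above
BPERF_SOCK=/var/tmp/bperf.sock

# Keep per-controller NVMe error counters and retry failed I/O indefinitely.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled so corrupted CRCs surface
# as COMMAND TRANSIENT TRANSPORT ERROR completions rather than silent corruption.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c operation. In the trace this is issued through the
# rpc_cmd helper (not the bperf socket); here we assume the default RPC socket
# of the application that computes the digests.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the queued randwrite workload for the configured 2 seconds.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Count the transient transport errors recorded on nvme0n1; the test asserts > 0.
errs=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
echo "transient transport errors: $errs"
(( errs > 0 ))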
00:26:38.483 [2024-05-15 02:27:26.403191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f6458 00:26:38.483 [2024-05-15 02:27:26.404380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.404449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.415247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e7818 00:26:38.483 [2024-05-15 02:27:26.416211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.416251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.426597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ee190 00:26:38.483 [2024-05-15 02:27:26.427365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.427413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.440057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e8d30 00:26:38.483 [2024-05-15 02:27:26.441358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.441409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.452305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fc998 00:26:38.483 [2024-05-15 02:27:26.453704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.453742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.464128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f9f68 00:26:38.483 [2024-05-15 02:27:26.465300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.465340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.480180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e6300 00:26:38.483 [2024-05-15 02:27:26.482146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.482188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:26:38.483 [2024-05-15 02:27:26.488954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f0ff8 00:26:38.483 [2024-05-15 02:27:26.489944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.483 [2024-05-15 02:27:26.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:38.756 [2024-05-15 02:27:26.503976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f46d0 00:26:38.756 [2024-05-15 02:27:26.505675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.505718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.515229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f96f8 00:26:38.757 [2024-05-15 02:27:26.517234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.517276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.528040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fdeb0 00:26:38.757 [2024-05-15 02:27:26.529112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.529151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.539412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e0ea0 00:26:38.757 [2024-05-15 02:27:26.540841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.540882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.551909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1ca0 00:26:38.757 [2024-05-15 02:27:26.553003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.553064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.567178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e0630 00:26:38.757 [2024-05-15 02:27:26.569084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.569141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.580443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:38.757 [2024-05-15 02:27:26.582430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.582490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.590269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e95a0 00:26:38.757 [2024-05-15 02:27:26.591379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.591462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.607006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f5378 00:26:38.757 [2024-05-15 02:27:26.608807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.608853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.619293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e27f0 00:26:38.757 [2024-05-15 02:27:26.620799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.620845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.631552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e6738 00:26:38.757 [2024-05-15 02:27:26.632941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.632984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.642988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f6cc8 00:26:38.757 [2024-05-15 02:27:26.644259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.644306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.655521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f9b30 00:26:38.757 [2024-05-15 02:27:26.657004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.657052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.671350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f35f0 00:26:38.757 [2024-05-15 02:27:26.673751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.673832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.682867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e1710 00:26:38.757 [2024-05-15 02:27:26.683932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.683993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.702857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e3d08 00:26:38.757 [2024-05-15 02:27:26.704371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.704447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.718890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e38d0 00:26:38.757 [2024-05-15 02:27:26.720442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.720501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.736307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f31b8 00:26:38.757 [2024-05-15 02:27:26.737434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.737480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.752683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f4298 00:26:38.757 [2024-05-15 02:27:26.753750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.753802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:38.757 [2024-05-15 02:27:26.768281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e7818 00:26:38.757 [2024-05-15 02:27:26.769317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.757 [2024-05-15 02:27:26.769358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.786847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fda78 00:26:39.016 [2024-05-15 02:27:26.788720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.788791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.802656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f3a28 00:26:39.016 [2024-05-15 02:27:26.804155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.804218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.819426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f4f40 00:26:39.016 [2024-05-15 02:27:26.821092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.821140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.836233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fc560 00:26:39.016 [2024-05-15 02:27:26.838105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.838170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.851828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5a90 00:26:39.016 [2024-05-15 02:27:26.853125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.853186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.867380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:39.016 [2024-05-15 02:27:26.869059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.869104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.883153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ea248 00:26:39.016 [2024-05-15 02:27:26.884854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 
02:27:26.884899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.899275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e9168 00:26:39.016 [2024-05-15 02:27:26.901005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.901048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.914841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1430 00:26:39.016 [2024-05-15 02:27:26.916539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.916579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.929515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1430 00:26:39.016 [2024-05-15 02:27:26.930840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.930882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.940997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190de8a8 00:26:39.016 [2024-05-15 02:27:26.942404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.942442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.953701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f0bc0 00:26:39.016 [2024-05-15 02:27:26.955327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.955372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.965103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fcdd0 00:26:39.016 [2024-05-15 02:27:26.966372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.966422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.976810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190de8a8 00:26:39.016 [2024-05-15 02:27:26.977987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.016 [2024-05-15 02:27:26.978026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:26.988170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2948 00:26:39.016 [2024-05-15 02:27:26.989168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:26.989206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:27.002652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f81e0 00:26:39.016 [2024-05-15 02:27:27.004490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.016 [2024-05-15 02:27:27.004537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.016 [2024-05-15 02:27:27.011359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ff3c8 00:26:39.016 [2024-05-15 02:27:27.012253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.017 [2024-05-15 02:27:27.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.017 [2024-05-15 02:27:27.026406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fe2e8 00:26:39.017 [2024-05-15 02:27:27.027969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.017 [2024-05-15 02:27:27.028018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.274 [2024-05-15 02:27:27.037972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fb8b8 00:26:39.274 [2024-05-15 02:27:27.039363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.274 [2024-05-15 02:27:27.039417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.049800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fa7d8 00:26:39.275 [2024-05-15 02:27:27.051072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.051130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.064744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e3498 00:26:39.275 [2024-05-15 02:27:27.066713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22461 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.066777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.073572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5220 00:26:39.275 [2024-05-15 02:27:27.074543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.074593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.088232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fdeb0 00:26:39.275 [2024-05-15 02:27:27.089721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.089768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.099971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f8a50 00:26:39.275 [2024-05-15 02:27:27.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.101313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.112433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190eb328 00:26:39.275 [2024-05-15 02:27:27.114056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.114103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.123675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190eea00 00:26:39.275 [2024-05-15 02:27:27.125101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.125146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.135625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f96f8 00:26:39.275 [2024-05-15 02:27:27.137027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.137069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.150247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:39.275 [2024-05-15 02:27:27.152348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1981 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.152410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.159119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e6fa8 00:26:39.275 [2024-05-15 02:27:27.160173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.160216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.173849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e4140 00:26:39.275 [2024-05-15 02:27:27.175582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.175625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.185214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fdeb0 00:26:39.275 [2024-05-15 02:27:27.186780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.186824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.197107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e99d8 00:26:39.275 [2024-05-15 02:27:27.198533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.208403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:39.275 [2024-05-15 02:27:27.209683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.209732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.220569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fc998 00:26:39.275 [2024-05-15 02:27:27.221800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.221846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.235608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e7818 00:26:39.275 [2024-05-15 02:27:27.237417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:12213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.237461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.247868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190edd58 00:26:39.275 [2024-05-15 02:27:27.249707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.249750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.259749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f3a28 00:26:39.275 [2024-05-15 02:27:27.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.268463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5658 00:26:39.275 [2024-05-15 02:27:27.269277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.269316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.275 [2024-05-15 02:27:27.283254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fbcf0 00:26:39.275 [2024-05-15 02:27:27.284746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.275 [2024-05-15 02:27:27.284785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.533 [2024-05-15 02:27:27.294860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ed4e8 00:26:39.533 [2024-05-15 02:27:27.296058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.533 [2024-05-15 02:27:27.296100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.533 [2024-05-15 02:27:27.306650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fa3a0 00:26:39.533 [2024-05-15 02:27:27.307664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.533 [2024-05-15 02:27:27.307706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.533 [2024-05-15 02:27:27.318068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5ec8 00:26:39.533 [2024-05-15 02:27:27.318920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.533 [2024-05-15 02:27:27.318963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.533 [2024-05-15 02:27:27.330743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5220 00:26:39.533 [2024-05-15 02:27:27.331950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.533 [2024-05-15 02:27:27.331997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.343376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190eb760 00:26:39.534 [2024-05-15 02:27:27.344705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.344749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.355611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190df550 00:26:39.534 [2024-05-15 02:27:27.356817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.356862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.370508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f6890 00:26:39.534 [2024-05-15 02:27:27.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.372414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.379334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e84c0 00:26:39.534 [2024-05-15 02:27:27.380395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.380435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.392177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fb048 00:26:39.534 [2024-05-15 02:27:27.393156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.393203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.406707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190eee38 00:26:39.534 [2024-05-15 
02:27:27.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.408058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.418981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f8618 00:26:39.534 [2024-05-15 02:27:27.420294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.420340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.434152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e1b48 00:26:39.534 [2024-05-15 02:27:27.436029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.436070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.442913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e12d8 00:26:39.534 [2024-05-15 02:27:27.443867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.443910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.460367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.534 [2024-05-15 02:27:27.461962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.462007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.476063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.534 [2024-05-15 02:27:27.477642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.477683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.491907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.534 [2024-05-15 02:27:27.493541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.493582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.508034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 
00:26:39.534 [2024-05-15 02:27:27.509634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.509673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.524161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.534 [2024-05-15 02:27:27.525763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.525818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.534 [2024-05-15 02:27:27.539813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.534 [2024-05-15 02:27:27.541285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.534 [2024-05-15 02:27:27.541330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.792 [2024-05-15 02:27:27.555070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.792 [2024-05-15 02:27:27.556594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.792 [2024-05-15 02:27:27.556635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.792 [2024-05-15 02:27:27.570472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.792 [2024-05-15 02:27:27.571975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.792 [2024-05-15 02:27:27.572014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.585910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.793 [2024-05-15 02:27:27.587462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.587502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.601672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.793 [2024-05-15 02:27:27.603243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.603313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.616827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) 
with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.618364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.618418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.632375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.633925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.633970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.647615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.649156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.649217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.663212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.664783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.664835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.678717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.680270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.680328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.696172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f2d80 00:26:39.793 [2024-05-15 02:27:27.698586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.698628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.707583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e12d8 00:26:39.793 [2024-05-15 02:27:27.709120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.709163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.726212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fbef70) with pdu=0x2000190e12d8 00:26:39.793 [2024-05-15 02:27:27.728602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.728663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.737547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190efae0 00:26:39.793 [2024-05-15 02:27:27.738692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.738748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.756540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e7818 00:26:39.793 [2024-05-15 02:27:27.758530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.758592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.771238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f1868 00:26:39.793 [2024-05-15 02:27:27.773170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.773235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.786893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fbcf0 00:26:39.793 [2024-05-15 02:27:27.788501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.793 [2024-05-15 02:27:27.788561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.793 [2024-05-15 02:27:27.806160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ee190 00:26:40.051 [2024-05-15 02:27:27.808675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.808738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.816684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e0a68 00:26:40.051 [2024-05-15 02:27:27.817676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.817738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.836106] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e2c28 00:26:40.051 [2024-05-15 02:27:27.838580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.846452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ef270 00:26:40.051 [2024-05-15 02:27:27.847768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.865730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f0bc0 00:26:40.051 [2024-05-15 02:27:27.867849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.867917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.880641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e6b70 00:26:40.051 [2024-05-15 02:27:27.882619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.882681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.896365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fe720 00:26:40.051 [2024-05-15 02:27:27.898168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.898233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.915758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fb048 00:26:40.051 [2024-05-15 02:27:27.918428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.927161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e1f80 00:26:40.051 [2024-05-15 02:27:27.928549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.928611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.945730] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ee5c8 00:26:40.051 [2024-05-15 02:27:27.947888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.947952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.956870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fe2e8 00:26:40.051 [2024-05-15 02:27:27.957879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.957939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.976756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f7100 00:26:40.051 [2024-05-15 02:27:27.978981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.979055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:27.988136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fa3a0 00:26:40.051 [2024-05-15 02:27:27.989164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:27.989216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:28.004632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f9b30 00:26:40.051 [2024-05-15 02:27:28.005600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:28.005657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:28.024535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e88f8 00:26:40.051 [2024-05-15 02:27:28.026964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.051 [2024-05-15 02:27:28.027019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:40.051 [2024-05-15 02:27:28.037629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190de8a8 00:26:40.052 [2024-05-15 02:27:28.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.052 [2024-05-15 02:27:28.038942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:40.052 
[2024-05-15 02:27:28.052615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f5be8 00:26:40.052 [2024-05-15 02:27:28.053568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.052 [2024-05-15 02:27:28.053615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:40.309 [2024-05-15 02:27:28.068842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:40.309 [2024-05-15 02:27:28.070482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.309 [2024-05-15 02:27:28.070540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:40.309 [2024-05-15 02:27:28.086561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ebb98 00:26:40.309 [2024-05-15 02:27:28.088965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.309 [2024-05-15 02:27:28.089013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:40.309 [2024-05-15 02:27:28.100026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190ddc00 00:26:40.309 [2024-05-15 02:27:28.101586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.101632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.111974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fb8b8 00:26:40.310 [2024-05-15 02:27:28.113306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.113346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.123696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f9f68 00:26:40.310 [2024-05-15 02:27:28.124749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.135519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fa3a0 00:26:40.310 [2024-05-15 02:27:28.136744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.136785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 
m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.147632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e1710 00:26:40.310 [2024-05-15 02:27:28.148841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.148878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.162223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f0788 00:26:40.310 [2024-05-15 02:27:28.164114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.164158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.170862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f3a28 00:26:40.310 [2024-05-15 02:27:28.171797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.171852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.185691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e01f8 00:26:40.310 [2024-05-15 02:27:28.187276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.187318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.196921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f7970 00:26:40.310 [2024-05-15 02:27:28.198373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.198426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.208743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fc128 00:26:40.310 [2024-05-15 02:27:28.210058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.210106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.223332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190de038 00:26:40.310 [2024-05-15 02:27:28.225306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.225347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.231969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e99d8 00:26:40.310 [2024-05-15 02:27:28.232971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.233005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.246639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e3d08 00:26:40.310 [2024-05-15 02:27:28.248341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.248398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.257985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190eb328 00:26:40.310 [2024-05-15 02:27:28.259431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.259470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.269747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fe720 00:26:40.310 [2024-05-15 02:27:28.270990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.271036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.281173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fbcf0 00:26:40.310 [2024-05-15 02:27:28.282247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.282289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.295644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e1b48 00:26:40.310 [2024-05-15 02:27:28.297569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.297613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.304266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f5378 00:26:40.310 [2024-05-15 02:27:28.305040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.305081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:40.310 [2024-05-15 02:27:28.319350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e38d0 00:26:40.310 [2024-05-15 02:27:28.321105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.310 [2024-05-15 02:27:28.321152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.330736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f6890 00:26:40.568 [2024-05-15 02:27:28.332303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.332350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.342146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fb8b8 00:26:40.568 [2024-05-15 02:27:28.343571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.343612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.353448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fd208 00:26:40.568 [2024-05-15 02:27:28.354698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.354739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.364807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190fd208 00:26:40.568 [2024-05-15 02:27:28.365924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.365963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.379509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190f31b8 00:26:40.568 [2024-05-15 02:27:28.381444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.381494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:40.568 [2024-05-15 02:27:28.388214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbef70) with pdu=0x2000190e5220 00:26:40.568 [2024-05-15 02:27:28.389165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.568 [2024-05-15 02:27:28.389205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:40.568 00:26:40.568 Latency(us) 00:26:40.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.568 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:40.568 nvme0n1 : 2.00 18584.15 72.59 0.00 0.00 6879.79 2532.07 20494.89 00:26:40.568 =================================================================================================================== 00:26:40.568 Total : 18584.15 72.59 0.00 0.00 6879.79 2532.07 20494.89 00:26:40.568 0 00:26:40.568 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:40.568 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:40.568 | .driver_specific 00:26:40.568 | .nvme_error 00:26:40.568 | .status_code 00:26:40.568 | .command_transient_transport_error' 00:26:40.568 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:40.568 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87699 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87699 ']' 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87699 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87699 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:40.826 killing process with pid 87699 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87699' 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87699 00:26:40.826 Received shutdown signal, test time was about 2.000000 seconds 00:26:40.826 00:26:40.826 Latency(us) 00:26:40.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.826 =================================================================================================================== 00:26:40.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.826 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87699 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=16 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87752 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87752 /var/tmp/bperf.sock 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87752 ']' 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:41.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:41.084 02:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.084 Zero copy mechanism will not be used. 00:26:41.084 [2024-05-15 02:27:29.004714] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:41.084 [2024-05-15 02:27:29.004800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87752 ] 00:26:41.341 [2024-05-15 02:27:29.135954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.341 [2024-05-15 02:27:29.222415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.321 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.321 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:42.321 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.321 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.578 02:27:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.154 nvme0n1 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:43.154 02:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.411 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.411 Zero copy mechanism will not be used. 00:26:43.411 Running I/O for 2 seconds... 00:26:43.411 [2024-05-15 02:27:31.259046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.259461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.259496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.265960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.266316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.266352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.272749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.273100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.273134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.279572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.279937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.279974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.288071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.288509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.288547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.295536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.295988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.296026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.301858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.302213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.302251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.307228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.307555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.307586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.312813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.313184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.313222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.318729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.319057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.319087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.324093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.324432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.324462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.329591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.329938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.329976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.334896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.335216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.335251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.340095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.340418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.340451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.345339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.345665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.345698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.350603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.350918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.350952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.356461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.356858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.356894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.363786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.364339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.370177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.370637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.370689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.377909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.378265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.378304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.384743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.385056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.385107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.392227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.392598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.392633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.399152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.399503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.411 [2024-05-15 02:27:31.399539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.411 [2024-05-15 02:27:31.406354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.411 [2024-05-15 02:27:31.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.412 [2024-05-15 02:27:31.406735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.412 [2024-05-15 02:27:31.413712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.412 [2024-05-15 02:27:31.414055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.412 [2024-05-15 02:27:31.414087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.412 [2024-05-15 02:27:31.420590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.412 [2024-05-15 02:27:31.420906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.412 [2024-05-15 02:27:31.420937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.427640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.427976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.428009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.434933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.435294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.435329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.442023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.442373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.442432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.449261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.449621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.449655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.456346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.456713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.456758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.463436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.463762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.463794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.470147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.470482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.470514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.476511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.476813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.476845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.483083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.483432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.483461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.488356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.488658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.488684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.493186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.493530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.493573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.499258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.499588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.499634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.504246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.504530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.504565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.510374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.510720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.510758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.516746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.517057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.517093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.522536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.522838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.681 [2024-05-15 02:27:31.522875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.681 [2024-05-15 02:27:31.527818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.681 [2024-05-15 02:27:31.528158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.528198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.534148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.534471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.534510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.539441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.539804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.539851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.546005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.546504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.546556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.551702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.552083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.552124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.557726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.558081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.558119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.564727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.565031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.565070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.571336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.571713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.571760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.578560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.578977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.579028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.585242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.585540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.585573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.592266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.592578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.592607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.599000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 
[2024-05-15 02:27:31.599290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.599323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.605627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.605932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.605965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.612403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.612726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.612764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.619555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.619911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.619949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.626477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.626806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.626842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.633667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.634057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.634101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.641068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.641457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.641494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.647956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.648307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.648343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.655136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.655467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.662146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.662463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.662499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.669143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.669493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.669548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.675720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.676048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.676101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.681885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.682190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.682230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.687026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.687273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.687312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.691671] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.691898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.682 [2024-05-15 02:27:31.691941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.682 [2024-05-15 02:27:31.696277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.682 [2024-05-15 02:27:31.696516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.696549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.700914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.701150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.701185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.705570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.705778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.705820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.710259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.710489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.715003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.715214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.715240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.719673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.719890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.719918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:43.941 [2024-05-15 02:27:31.724277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.724517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.724544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.728870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.729074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.729099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.733478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.733690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.733715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.738114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.738315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.738339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.742737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.742957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.742983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.747338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.747573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.747598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.751890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.752104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.752130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.756643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.756858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.756884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.761235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.761467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.761492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.765744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.765965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.765991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.770402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.770628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.770655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.775231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.775467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.775495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.779960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.780179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.780207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.784596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.784809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.784837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.789205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.789459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.789490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.794134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.794356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.794398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.799167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.804023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.941 [2024-05-15 02:27:31.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.941 [2024-05-15 02:27:31.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.941 [2024-05-15 02:27:31.808964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.809197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.809231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.815542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.815814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.815853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.822312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.822598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.822633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.828874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.829127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.829154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.835261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.835519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.835548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.841593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.841850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.841877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.848034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.848272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.848298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.854867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.855116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.855149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.861717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.862010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.862039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.868361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.868614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 
[2024-05-15 02:27:31.868648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.874990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.875263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.881494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.881740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.881770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.887970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.888247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.888275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.894565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.894821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.894848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.901194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.901481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.901511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.907952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.908226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.908255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.914109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.914327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.914355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.918962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.919170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.919194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.923778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.923984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.924009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.928488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.928751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.928789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.933174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.933398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.933426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.937929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.938136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.938162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.942506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.942711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.942738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.947109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.947312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.947337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.942 [2024-05-15 02:27:31.951689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:43.942 [2024-05-15 02:27:31.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.942 [2024-05-15 02:27:31.951927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.956298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.956536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.956572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.961280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.961517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.961547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.965897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.966114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.966142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.970483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.970695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.970729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.975042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.975250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.975277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.979717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.979951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.201 [2024-05-15 02:27:31.979985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.201 [2024-05-15 02:27:31.984290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.201 [2024-05-15 02:27:31.984534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:31.984568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:31.989003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:31.989218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:31.989245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:31.993660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:31.993896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:31.993926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:31.998246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:31.998472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:31.998501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.002916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.003139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.003165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.007630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.007860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.007889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.012289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 
[2024-05-15 02:27:32.012521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.012550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.016871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.017087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.017114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.021564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.021779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.021823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.026150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.026366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.030726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.030944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.030969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.035302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.035541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.035568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.039981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.040199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.040225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.044569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.044771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.044797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.049195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.049415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.049441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.053965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.054172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.054198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.058564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.058782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.058808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.063249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.063482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.063508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.067866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.068083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.068109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.072465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.072693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.072720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.077104] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.077308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.077334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.081734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.081971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.081996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.086331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.086551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.086577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.090980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.091184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.091209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.095819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.096027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.096054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.100568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.100792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.100817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.105141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.105357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.105381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
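The tcp.c:2058:data_crc32_calc_done errors above are the receive path recomputing the NVMe/TCP data digest (a CRC32C over the data PDU payload) and finding a mismatch; each one is then surfaced through the paired spdk_nvme_print_completion line as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. As a rough sketch only (not SPDK's implementation), the following shows what such a digest check recomputes, assuming CRC32C (Castagnoli) and a hypothetical 512-byte block size for the writes seen above:

def crc32c(data: bytes) -> int:
    # Bitwise CRC32C (Castagnoli): reflected form, polynomial 0x1EDC6F41
    # (0x82F63B78 reflected), init and final XOR of 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Standard check value for CRC32C: crc32c(b"123456789") == 0xE3069283.
    assert crc32c(b"123456789") == 0xE3069283

    block_size = 512                      # hypothetical; not taken from the log
    payload = bytes(32 * block_size)      # e.g. a 32-block write payload (len:32 above)
    sent_digest = crc32c(payload)         # digest the sender appends to the data PDU

    corrupted = bytearray(payload)
    corrupted[100] ^= 0x01                # a single bit flipped "in flight"
    # The receiver recomputes the digest; a mismatch is what the log reports as a
    # "Data digest error", and the command completes with a transient transport error.
    print("digest mismatch:", crc32c(bytes(corrupted)) != sent_digest)

Because the completions carry dnr:0, the failed writes remain eligible for retry by the host, which is why the test keeps issuing further WRITE commands after each injected digest error.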
00:26:44.202 [2024-05-15 02:27:32.109736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.109948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.202 [2024-05-15 02:27:32.109973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.202 [2024-05-15 02:27:32.114322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.202 [2024-05-15 02:27:32.114535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.114559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.118948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.119163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.119186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.123569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.123787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.123813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.128178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.128405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.128430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.132768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.132969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.132994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.137306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.137523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.137548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.142255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.142514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.142541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.147703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.147935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.147964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.152550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.152781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.152815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.157253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.157491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.157524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.161891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.162110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.162138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.166525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.166740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.166765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.171111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.171324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.171351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.175687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.175890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.175915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.180225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.180448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.180475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.184867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.185082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.185116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.189466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.189688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.189720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.194350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.194598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.194635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.199024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.199264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.199298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.203674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.203890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.203921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.208235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.208457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.208489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.203 [2024-05-15 02:27:32.212819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.203 [2024-05-15 02:27:32.213043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.203 [2024-05-15 02:27:32.213075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.217556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.217777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.217818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.222263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.222501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.222531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.226794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.227007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.227037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.231371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.231605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.231637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.235966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.236175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 
[2024-05-15 02:27:32.236205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.240574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.240795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.240827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.245191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.245414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.245440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.249831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.250054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.250080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.254382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.254599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.254623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.258934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.259137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.259162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.263550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.263756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.263782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.268114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.268320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.268346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.272803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.273016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.273042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.277372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.277598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.277625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.282010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.282221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.282253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.286625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.286833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.286863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.291236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.291460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.463 [2024-05-15 02:27:32.291489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.463 [2024-05-15 02:27:32.295855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.463 [2024-05-15 02:27:32.296056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.296085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.300418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.300635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.300665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.305051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.305252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.305281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.309631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.309853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.309882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.314186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.314402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.314431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.318780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.319001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.319031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.323475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.323686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.323715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.328087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.328292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.328323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.332743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.332951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.332982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.337312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.337533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.337565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.341913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.342190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.342221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.347715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.347942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.347975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.352439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.352666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.352699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.356972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.357187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.357223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.361540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.361763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.361813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.366223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 
[2024-05-15 02:27:32.366449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.366477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.370765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.370990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.371016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.375312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.375550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.375581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.379946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.380161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.380192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.384580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.384807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.384852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.389194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.389425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.389460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.393858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.394067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.394100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.398375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.398600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.398630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.403033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.403236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.403267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.407627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.407829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.464 [2024-05-15 02:27:32.407858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.464 [2024-05-15 02:27:32.412185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.464 [2024-05-15 02:27:32.412413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.412444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.416753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.416956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.416986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.421355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.421576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.421605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.425989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.426196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.426226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.430612] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.430816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.430846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.435189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.435405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.435436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.439713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.439918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.439948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.444284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.444499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.444528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.448837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.449046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.449078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.453415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.453617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.458040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.458241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.458271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:44.465 [2024-05-15 02:27:32.462601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.462799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.462829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.467095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.467297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.467328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.471659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.471859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.471889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.465 [2024-05-15 02:27:32.476231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.465 [2024-05-15 02:27:32.476458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-05-15 02:27:32.476491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.480834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.481041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.481073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.485442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.489968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.490172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.490203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.494581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.494778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.494807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.499176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.499379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.499433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.504142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.504347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.504378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.508754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.508958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.508989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.514269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.514492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.514518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.518965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.519183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.519212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.523658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.523874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.523903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.528287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.528509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.528540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.532994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.533193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.533221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.537512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.537712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.537741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.542103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.542310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.542343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.546679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.546887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.551180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.551380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.551421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.555749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.555953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.555983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.560280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.560507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.560537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.564811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.565010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.565040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.569375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.569600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.569628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.573859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.574059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.725 [2024-05-15 02:27:32.574088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.725 [2024-05-15 02:27:32.578417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.725 [2024-05-15 02:27:32.578619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.578648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.583490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.583742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.583772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.590010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.590263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 
[2024-05-15 02:27:32.590302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.596255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.596516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.596549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.601971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.602187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.602224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.606771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.606959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.606990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.611441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.611648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.616118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.616309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.616338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.620768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.620957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.620989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.625311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.625512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.625543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.629913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.630111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.630144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.634448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.634805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.634850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.638971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.639171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.639206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.643513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.643716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.643749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.648126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.648320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.648354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.652736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.652930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.652963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.657255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.657480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.657514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.661861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.662049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.662081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.666470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.666659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.666692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.671020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.671213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.671245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.675609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.675814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.675849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.680181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.680381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.680426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.684775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.684987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.685022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.689252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.689490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.689527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.693913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.694113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.694147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.698487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.698697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.698731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.703036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.703233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.703262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.707652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.726 [2024-05-15 02:27:32.707845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.726 [2024-05-15 02:27:32.707879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.726 [2024-05-15 02:27:32.712096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 [2024-05-15 02:27:32.712288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.712322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.727 [2024-05-15 02:27:32.716733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 [2024-05-15 02:27:32.716951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.716986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.727 [2024-05-15 02:27:32.721577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 
[2024-05-15 02:27:32.721814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.721849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.727 [2024-05-15 02:27:32.726379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 [2024-05-15 02:27:32.726593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.726626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.727 [2024-05-15 02:27:32.730932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 [2024-05-15 02:27:32.731124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.731158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.727 [2024-05-15 02:27:32.735527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.727 [2024-05-15 02:27:32.735730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.727 [2024-05-15 02:27:32.735764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.740514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.740739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.740776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.745440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.745661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.745697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.750419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.750616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.750649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.755147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.755335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.755367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.759720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.759911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.759944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.764210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.764414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.764447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.768862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.769079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.769113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.773413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.773623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.773657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.777998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.778196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.778230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.782574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.782761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.782792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.787179] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.787411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.787444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.791764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.791955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.791987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.796408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.796596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.796627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.800935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.801142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.801175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.805526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.805725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.805757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.810056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.810291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.814655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.814868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.814900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:44.986 [2024-05-15 02:27:32.819140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.819326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.819359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.823737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.823927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.986 [2024-05-15 02:27:32.823958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.986 [2024-05-15 02:27:32.828271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.986 [2024-05-15 02:27:32.828498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.828531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.832813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.833004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.833036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.837352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.837572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.837604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.841879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.842092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.842125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.846525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.846726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.846760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.851077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.851266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.851300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.855731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.855939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.855971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.860210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.860477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.860512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.864843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.865091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.865137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.870823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.871072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.871110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.876420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.876610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.876645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.881045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.881256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.881290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.885830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.886042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.886075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.890508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.890702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.895110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.895462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.895516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.899797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.900022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.904417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.904611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.904646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.908947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.909274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.909333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.913453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.913700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.913743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.918091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.918338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.918400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.922622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.922845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.922888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.927414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.927667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.927708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.932063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.932295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.932339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.936612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.936845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.936888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.941179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.941439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.941486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.945838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 
[2024-05-15 02:27:32.946151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.950501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.950760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.950803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.955214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.987 [2024-05-15 02:27:32.955477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.987 [2024-05-15 02:27:32.955520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.987 [2024-05-15 02:27:32.959948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.960195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.960241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.964664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.964932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.964980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.970949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.971185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.971233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.975824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.976034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.976073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.980529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.980731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.980762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.985320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.985545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.985582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.990080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.990270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.990307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.994673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.994877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.994913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.988 [2024-05-15 02:27:32.999300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:44.988 [2024-05-15 02:27:32.999534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.988 [2024-05-15 02:27:32.999574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.247 [2024-05-15 02:27:33.004045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.247 [2024-05-15 02:27:33.004265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.247 [2024-05-15 02:27:33.004297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.247 [2024-05-15 02:27:33.009757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.247 [2024-05-15 02:27:33.009981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.010012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.014662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.014869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.014900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.019548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.019777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.019810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.024560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.024760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.024793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.029310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.029512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.029541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.034043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.034250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.034278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.038775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.038958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.038992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.043433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.043621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.043649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.048894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.049128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.049160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.053597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.053813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.053856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.058915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.059110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.059156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.063954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.064144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.064173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.068597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.068781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.068807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.074093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.074444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.074490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.079089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.079180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.079211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.083857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 
[2024-05-15 02:27:33.083946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.083981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.088542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.088643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.088668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.093202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.093293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.093320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.097969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.098053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.098079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.102616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.102697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.102724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.107230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.107315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.107342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.111843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.111930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.111956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.116417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.116493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.116518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.120964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.121073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.121099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.125541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.125622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.125651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.130122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.130228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.130256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.134723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.134806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.134834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.248 [2024-05-15 02:27:33.139323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.248 [2024-05-15 02:27:33.139437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.248 [2024-05-15 02:27:33.139466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.143974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.144088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.144117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.148624] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.148723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.153143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.153249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.153279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.158123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.158213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.158242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.163014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.163125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.163161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.168153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.168267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.168301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.172939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.173047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.173082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.177883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.177982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.178017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:45.249 [2024-05-15 02:27:33.182612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.182718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.182753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.187248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.187429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.187473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.192053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.192212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.192266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.196716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.196825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.196858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.201317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.201415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.201448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.206446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.206533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.211132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.211240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.211273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.215864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.215945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.215978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.220582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.220686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.220719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.225257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.225339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.225372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.230042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.230153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.230186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.234828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.234971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.235013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.239740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.239829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.239863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.249 [2024-05-15 02:27:33.244354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1df8430) with pdu=0x2000190fef90 00:26:45.249 [2024-05-15 02:27:33.244460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.249 [2024-05-15 02:27:33.244495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.249 00:26:45.249 Latency(us) 00:26:45.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:45.249 nvme0n1 : 2.00 6071.54 758.94 0.00 0.00 2628.32 1839.48 13166.78 00:26:45.249 =================================================================================================================== 00:26:45.249 Total : 6071.54 758.94 0.00 0.00 2628.32 1839.48 13166.78 00:26:45.249 0 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:45.507 | .driver_specific 00:26:45.507 | .nvme_error 00:26:45.507 | .status_code 00:26:45.507 | .command_transient_transport_error' 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 392 > 0 )) 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87752 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87752 ']' 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87752 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:45.507 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87752 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:45.765 killing process with pid 87752 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87752' 00:26:45.765 Received shutdown signal, test time was about 2.000000 seconds 00:26:45.765 00:26:45.765 Latency(us) 00:26:45.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.765 =================================================================================================================== 00:26:45.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87752 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87752 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87547 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87547 ']' 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87547 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:45.765 02:27:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87547 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:45.765 killing process with pid 87547 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87547' 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87547 00:26:45.765 [2024-05-15 02:27:33.747107] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:45.765 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87547 00:26:46.022 00:26:46.022 real 0m18.747s 00:26:46.022 user 0m37.215s 00:26:46.022 sys 0m4.459s 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.022 ************************************ 00:26:46.022 END TEST nvmf_digest_error 00:26:46.022 ************************************ 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.022 02:27:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:46.022 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.022 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:46.022 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.022 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.022 rmmod nvme_tcp 00:26:46.022 rmmod nvme_fabrics 00:26:46.280 rmmod nvme_keyring 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 87547 ']' 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 87547 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 87547 ']' 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 87547 00:26:46.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87547) - No such process 00:26:46.280 Process with pid 87547 is not found 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 87547 is not found' 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest 
-- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:46.280 ************************************ 00:26:46.280 END TEST nvmf_digest 00:26:46.280 ************************************ 00:26:46.280 00:26:46.280 real 0m37.791s 00:26:46.280 user 1m13.353s 00:26:46.280 sys 0m9.244s 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:46.280 02:27:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.280 02:27:34 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:26:46.280 02:27:34 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:26:46.280 02:27:34 nvmf_tcp -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:46.280 02:27:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:46.280 02:27:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:46.280 02:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.280 ************************************ 00:26:46.280 START TEST nvmf_mdns_discovery 00:26:46.280 ************************************ 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:46.280 * Looking for test storage... 
00:26:46.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.280 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:26:46.281 
02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:46.281 Cannot find device "nvmf_tgt_br" 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:46.281 Cannot find device "nvmf_tgt_br2" 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:26:46.281 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:46.539 Cannot find device "nvmf_tgt_br" 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:46.539 Cannot find device "nvmf_tgt_br2" 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:46.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:46.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:46.539 02:27:34 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:46.539 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:46.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:26:46.796 00:26:46.796 --- 10.0.0.2 ping statistics --- 00:26:46.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.796 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:46.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:46.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:26:46.796 00:26:46.796 --- 10.0.0.3 ping statistics --- 00:26:46.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.796 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:46.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:26:46.796 00:26:46.796 --- 10.0.0.1 ping statistics --- 00:26:46.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.796 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=88022 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:46.796 
02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 88022 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 88022 ']' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:46.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:46.796 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.796 [2024-05-15 02:27:34.668123] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:46.796 [2024-05-15 02:27:34.668243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.053 [2024-05-15 02:27:34.812595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.053 [2024-05-15 02:27:34.870852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.053 [2024-05-15 02:27:34.870908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.053 [2024-05-15 02:27:34.870920] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.053 [2024-05-15 02:27:34.870928] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.053 [2024-05-15 02:27:34.870936] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
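The nvmf_veth_init trace above builds the test topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the two target interfaces (10.0.0.2 on nvmf_tgt_if, 10.0.0.3 on nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, their peer ends are joined by the nvmf_br bridge, TCP port 4420 is opened, and reachability is verified with ping before the target application is started inside the namespace with --wait-for-rpc. A condensed sketch of the equivalent commands, abridged from the trace (interface names, addresses and paths are the ones the scripts use; the error handling of the real common.sh helpers is omitted):

  # create the target namespace and three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the root-namespace peers together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic and check reachability in both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # start the NVMe-oF target inside the namespace, paused until RPC configuration
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc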
00:26:47.053 [2024-05-15 02:27:34.870962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.053 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:47.053 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:26:47.053 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.053 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.053 02:27:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.053 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 [2024-05-15 02:27:35.102531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 [2024-05-15 02:27:35.110458] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:47.311 [2024-05-15 02:27:35.110681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 null0 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 null1 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 null2 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 null3 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=88053 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 88053 /tmp/host.sock 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 88053 ']' 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:47.311 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.311 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.312 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:47.312 02:27:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.312 [2024-05-15 02:27:35.208137] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
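Two SPDK applications are now in play: the NVMe-oF target launched earlier inside nvmf_tgt_ns_spdk (pid 88022, answering RPCs on its default socket) and a second nvmf_tgt just started in the root namespace as the discovery host (pid 88053) with its JSON-RPC server on /tmp/host.sock. Every rpc_cmd -s /tmp/host.sock call that follows therefore drives the host side, while plain rpc_cmd calls keep configuring the target. A minimal sketch of that pattern (the backgrounding with & and $! is an assumption about how mdns_discovery.sh wires it up; only the commands themselves appear in the trace):

  # discovery-host side: a second nvmf_tgt with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  waitforlisten "$hostpid" /tmp/host.sock    # autotest helper: wait until the RPC socket is listening
  # -s selects which application an RPC goes to
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers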
00:26:47.312 [2024-05-15 02:27:35.208711] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88053 ] 00:26:47.570 [2024-05-15 02:27:35.341689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.570 [2024-05-15 02:27:35.411524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=88076 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:26:48.515 02:27:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:26:48.515 Process 1003 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:26:48.515 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:26:48.515 Successfully dropped root privileges. 00:26:48.515 avahi-daemon 0.8 starting up. 00:26:48.515 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:26:48.515 Successfully called chroot(). 00:26:48.515 Successfully dropped remaining capabilities. 00:26:49.448 No service file found in /etc/avahi/services. 00:26:49.448 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:26:49.448 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:26:49.448 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:26:49.448 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:26:49.448 Network interface enumeration completed. 00:26:49.448 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:26:49.448 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:26:49.448 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:26:49.448 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:26:49.448 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2853409154. 
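Before any subsystems are advertised, the test replaces the system avahi-daemon with one running inside the target namespace, restricted to IPv4 on the two target interfaces, and then tells the host-side bdev_nvme layer to browse for the _nvme-disc._tcp service using the test host NQN. Roughly (the configuration is fed through process substitution, which is why it appears as /dev/fd/63 in the trace; the & and exact quoting are assumptions):

  avahi-daemon --kill    # stop the system-wide instance
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
      '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
  # start mDNS-based discovery on the host application and inspect the result
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info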
00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:49.448 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:49.706 [2024-05-15 02:27:37.701720] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:49.706 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 [2024-05-15 02:27:37.743377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 [2024-05-15 02:27:37.783351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 [2024-05-15 02:27:37.791332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.964 02:27:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:26:50.897 [2024-05-15 02:27:38.601728] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:51.463 [2024-05-15 02:27:39.201752] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:26:51.463 [2024-05-15 02:27:39.201799] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:51.463 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:51.463 cookie is 0 00:26:51.463 is_local: 1 00:26:51.463 our_own: 0 00:26:51.463 wide_area: 0 00:26:51.463 multicast: 1 00:26:51.463 cached: 1 00:26:51.463 [2024-05-15 02:27:39.301745] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:51.463 [2024-05-15 02:27:39.301798] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:51.463 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:51.463 cookie is 0 00:26:51.463 is_local: 1 00:26:51.463 our_own: 0 00:26:51.463 wide_area: 0 00:26:51.463 multicast: 1 00:26:51.463 cached: 1 00:26:51.463 [2024-05-15 02:27:39.301823] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:26:51.463 [2024-05-15 02:27:39.401743] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:26:51.463 [2024-05-15 02:27:39.401790] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:51.463 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:51.463 cookie is 0 00:26:51.463 is_local: 1 00:26:51.463 our_own: 0 00:26:51.463 wide_area: 0 00:26:51.463 multicast: 1 00:26:51.463 cached: 1 00:26:51.722 [2024-05-15 02:27:39.501744] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:51.722 [2024-05-15 02:27:39.501793] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:51.722 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:51.722 cookie is 0 00:26:51.722 is_local: 1 00:26:51.722 our_own: 0 00:26:51.722 wide_area: 0 00:26:51.722 multicast: 1 00:26:51.722 cached: 1 00:26:51.722 [2024-05-15 02:27:39.501831] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:26:52.288 [2024-05-15 02:27:40.206100] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:52.288 [2024-05-15 02:27:40.206152] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:52.288 [2024-05-15 02:27:40.206174] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:52.288 [2024-05-15 02:27:40.292322] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:26:52.546 [2024-05-15 02:27:40.349554] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:52.546 [2024-05-15 02:27:40.349619] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:52.546 [2024-05-15 02:27:40.406039] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.546 [2024-05-15 02:27:40.406087] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.546 [2024-05-15 02:27:40.406109] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.546 [2024-05-15 02:27:40.492217] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:26:52.546 [2024-05-15 02:27:40.547877] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:52.546 [2024-05-15 02:27:40.547951] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 
00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:55.112 02:27:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:55.113 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.370 02:27:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.326 [2024-05-15 02:27:44.314378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:56.326 [2024-05-15 02:27:44.315628] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:56.326 [2024-05-15 02:27:44.315669] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:56.326 [2024-05-15 02:27:44.315707] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:56.326 [2024-05-15 02:27:44.315722] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.326 [2024-05-15 02:27:44.322420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:56.326 [2024-05-15 02:27:44.323629] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:56.326 [2024-05-15 02:27:44.323877] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.326 02:27:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:26:56.584 [2024-05-15 02:27:44.453751] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:26:56.584 [2024-05-15 02:27:44.454722] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:26:56.584 [2024-05-15 02:27:44.511091] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:56.584 [2024-05-15 02:27:44.511136] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.584 [2024-05-15 02:27:44.511144] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:56.584 [2024-05-15 02:27:44.511163] 
bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.584 [2024-05-15 02:27:44.518936] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:56.584 [2024-05-15 02:27:44.518960] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:56.584 [2024-05-15 02:27:44.518967] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:56.584 [2024-05-15 02:27:44.518984] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:56.584 [2024-05-15 02:27:44.556876] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.584 [2024-05-15 02:27:44.556926] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:56.584 [2024-05-15 02:27:44.564868] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:56.584 [2024-05-15 02:27:44.564906] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.517 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 
00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.518 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.778 [2024-05-15 02:27:45.615498] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:57.778 [2024-05-15 02:27:45.615762] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:57.778 [2024-05-15 02:27:45.615947] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:57.778 [2024-05-15 02:27:45.616094] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.778 [2024-05-15 02:27:45.617721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.617785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.617803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.617819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.617847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.617861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.617876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.617886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.778 [2024-05-15 02:27:45.623493] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:57.778 [2024-05-15 02:27:45.623707] 
bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.778 [2024-05-15 02:27:45.627674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.778 02:27:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:26:57.778 [2024-05-15 02:27:45.630734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.630921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.631092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.631108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.631118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.631129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.631140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.778 [2024-05-15 02:27:45.631149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.778 [2024-05-15 02:27:45.631158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.778 [2024-05-15 02:27:45.637708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.778 [2024-05-15 02:27:45.637921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.637991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.638009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.778 [2024-05-15 02:27:45.638021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.778 [2024-05-15 02:27:45.638044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.778 [2024-05-15 02:27:45.638061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.778 [2024-05-15 02:27:45.638071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.778 [2024-05-15 02:27:45.638083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.778 [2024-05-15 02:27:45.638100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.778 [2024-05-15 02:27:45.640661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.778 [2024-05-15 02:27:45.647813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.778 [2024-05-15 02:27:45.647951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.648003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.648019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.778 [2024-05-15 02:27:45.648031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.778 [2024-05-15 02:27:45.648050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.778 [2024-05-15 02:27:45.648066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.778 [2024-05-15 02:27:45.648076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.778 [2024-05-15 02:27:45.648087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.778 [2024-05-15 02:27:45.648103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.778 [2024-05-15 02:27:45.650674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.778 [2024-05-15 02:27:45.650772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.650821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.650838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.778 [2024-05-15 02:27:45.650849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.778 [2024-05-15 02:27:45.650866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.778 [2024-05-15 02:27:45.650881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.778 [2024-05-15 02:27:45.650891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.778 [2024-05-15 02:27:45.650900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.778 [2024-05-15 02:27:45.650916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.778 [2024-05-15 02:27:45.657885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.778 [2024-05-15 02:27:45.657982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.658029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.658046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.778 [2024-05-15 02:27:45.658057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.778 [2024-05-15 02:27:45.658074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.778 [2024-05-15 02:27:45.658089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.778 [2024-05-15 02:27:45.658099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.778 [2024-05-15 02:27:45.658108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.778 [2024-05-15 02:27:45.658123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.778 [2024-05-15 02:27:45.660735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.778 [2024-05-15 02:27:45.660822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.660870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.778 [2024-05-15 02:27:45.660886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.778 [2024-05-15 02:27:45.660896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.660913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.660927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.660936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.660945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.779 [2024-05-15 02:27:45.660960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.779 [2024-05-15 02:27:45.667946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.779 [2024-05-15 02:27:45.668046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.668095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.668111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.779 [2024-05-15 02:27:45.668122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.668139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.668154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.668164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.668173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.779 [2024-05-15 02:27:45.668188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.779 [2024-05-15 02:27:45.670794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.779 [2024-05-15 02:27:45.670901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.670951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.670968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.779 [2024-05-15 02:27:45.670979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.670997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.671012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.671022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.671031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.779 [2024-05-15 02:27:45.671047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.779 [2024-05-15 02:27:45.678017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.779 [2024-05-15 02:27:45.678300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.678538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.678680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.779 [2024-05-15 02:27:45.678795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.678822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.678859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.678871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.678881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.779 [2024-05-15 02:27:45.678898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.779 [2024-05-15 02:27:45.680861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.779 [2024-05-15 02:27:45.681093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.681270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.681340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.779 [2024-05-15 02:27:45.681549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.681625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.681781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.681807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.681834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.779 [2024-05-15 02:27:45.681860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.779 [2024-05-15 02:27:45.688247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.779 [2024-05-15 02:27:45.688501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.688672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.688748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.779 [2024-05-15 02:27:45.688941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.689101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.689206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.689324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.689466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.779 [2024-05-15 02:27:45.689518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.779 [2024-05-15 02:27:45.691050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.779 [2024-05-15 02:27:45.691293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.691517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.691648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.779 [2024-05-15 02:27:45.691841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.692021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.692188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.692204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.692214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.779 [2024-05-15 02:27:45.692231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.779 [2024-05-15 02:27:45.698458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.779 [2024-05-15 02:27:45.698559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.698609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.698626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.779 [2024-05-15 02:27:45.698637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.698655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.698670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.698680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.698689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.779 [2024-05-15 02:27:45.698704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.779 [2024-05-15 02:27:45.701249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.779 [2024-05-15 02:27:45.701335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.701401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.701420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.779 [2024-05-15 02:27:45.701431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.701468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.701485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.701495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.701505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.779 [2024-05-15 02:27:45.701520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.779 [2024-05-15 02:27:45.708522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.779 [2024-05-15 02:27:45.708617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.708665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.779 [2024-05-15 02:27:45.708682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.779 [2024-05-15 02:27:45.708692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.779 [2024-05-15 02:27:45.708709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.779 [2024-05-15 02:27:45.708724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.779 [2024-05-15 02:27:45.708734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.779 [2024-05-15 02:27:45.708743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.779 [2024-05-15 02:27:45.708758] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.780 [2024-05-15 02:27:45.711305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.780 [2024-05-15 02:27:45.711487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.711550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.711568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.780 [2024-05-15 02:27:45.711579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.711597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.711612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.711621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.711630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.780 [2024-05-15 02:27:45.711646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.780 [2024-05-15 02:27:45.718585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.780 [2024-05-15 02:27:45.718686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.718736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.718752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.780 [2024-05-15 02:27:45.718764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.718781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.718796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.718806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.718815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.780 [2024-05-15 02:27:45.718830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.780 [2024-05-15 02:27:45.721365] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.780 [2024-05-15 02:27:45.721456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.721504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.721520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.780 [2024-05-15 02:27:45.721531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.721548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.721562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.721572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.721581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.780 [2024-05-15 02:27:45.721596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.780 [2024-05-15 02:27:45.728651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.780 [2024-05-15 02:27:45.728749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.728799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.728815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.780 [2024-05-15 02:27:45.728826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.728843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.728858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.728867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.728876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.780 [2024-05-15 02:27:45.728891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.780 [2024-05-15 02:27:45.731424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.780 [2024-05-15 02:27:45.731517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.731566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.731582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.780 [2024-05-15 02:27:45.731593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.731609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.731624] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.731634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.731643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.780 [2024-05-15 02:27:45.731658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.780 [2024-05-15 02:27:45.738714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.780 [2024-05-15 02:27:45.738811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.738860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.738876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.780 [2024-05-15 02:27:45.738887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.738903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.738918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.738927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.738937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.780 [2024-05-15 02:27:45.738952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.780 [2024-05-15 02:27:45.741482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.780 [2024-05-15 02:27:45.741567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.741615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.741631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.780 [2024-05-15 02:27:45.741642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.741658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.741673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.741682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.741691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.780 [2024-05-15 02:27:45.741706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.780 [2024-05-15 02:27:45.748778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.780 [2024-05-15 02:27:45.748873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.748920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.748937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ace520 with addr=10.0.0.2, port=4420 00:26:57.780 [2024-05-15 02:27:45.748947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ace520 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.748964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace520 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.748979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.748988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.748998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.780 [2024-05-15 02:27:45.749013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.780 [2024-05-15 02:27:45.751537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:57.780 [2024-05-15 02:27:45.751638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.751688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.780 [2024-05-15 02:27:45.751704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25ca0 with addr=10.0.0.3, port=4420 00:26:57.780 [2024-05-15 02:27:45.751715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25ca0 is same with the state(5) to be set 00:26:57.780 [2024-05-15 02:27:45.751732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25ca0 (9): Bad file descriptor 00:26:57.780 [2024-05-15 02:27:45.751748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:57.780 [2024-05-15 02:27:45.751757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:57.780 [2024-05-15 02:27:45.751766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:57.781 [2024-05-15 02:27:45.751781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.781 [2024-05-15 02:27:45.755399] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:57.781 [2024-05-15 02:27:45.755432] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:57.781 [2024-05-15 02:27:45.755479] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.781 [2024-05-15 02:27:45.756365] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:26:57.781 [2024-05-15 02:27:45.756405] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:57.781 [2024-05-15 02:27:45.756426] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:58.038 [2024-05-15 02:27:45.843497] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:58.038 [2024-05-15 02:27:45.843579] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:58.972 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.973 02:27:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:26:59.231 [2024-05-15 02:27:47.002163] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:00.165 02:27:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.165 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.165 [2024-05-15 02:27:48.179924] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:00.423 2024/05/15 02:27:48 error on 
JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:00.423 request: 00:27:00.423 { 00:27:00.423 "method": "bdev_nvme_start_mdns_discovery", 00:27:00.423 "params": { 00:27:00.423 "name": "mdns", 00:27:00.423 "svcname": "_nvme-disc._http", 00:27:00.423 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:00.423 } 00:27:00.423 } 00:27:00.423 Got JSON-RPC error response 00:27:00.423 GoRPCClient: error on JSON-RPC call 00:27:00.423 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:00.423 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:00.424 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:00.424 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:00.424 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:00.424 02:27:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:00.988 [2024-05-15 02:27:48.768496] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:00.988 [2024-05-15 02:27:48.868496] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:00.988 [2024-05-15 02:27:48.968525] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:00.988 [2024-05-15 02:27:48.968588] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:27:00.988 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:00.988 cookie is 0 00:27:00.988 is_local: 1 00:27:00.988 our_own: 0 00:27:00.988 wide_area: 0 00:27:00.988 multicast: 1 00:27:00.988 cached: 1 00:27:01.245 [2024-05-15 02:27:49.068516] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:01.245 [2024-05-15 02:27:49.068568] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:27:01.245 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:01.245 cookie is 0 00:27:01.245 is_local: 1 00:27:01.245 our_own: 0 00:27:01.245 wide_area: 0 00:27:01.245 multicast: 1 00:27:01.245 cached: 1 00:27:01.245 [2024-05-15 02:27:49.068585] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:01.245 [2024-05-15 02:27:49.168515] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:01.245 [2024-05-15 02:27:49.168563] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:27:01.245 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:01.245 cookie is 0 00:27:01.245 is_local: 1 00:27:01.245 our_own: 0 00:27:01.245 wide_area: 0 00:27:01.245 multicast: 1 00:27:01.245 cached: 1 00:27:01.563 [2024-05-15 02:27:49.268514] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:01.563 [2024-05-15 02:27:49.268561] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:27:01.563 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:01.563 cookie is 0 00:27:01.563 is_local: 1 00:27:01.563 our_own: 0 00:27:01.563 wide_area: 0 00:27:01.563 multicast: 1 00:27:01.563 cached: 1 00:27:01.563 [2024-05-15 02:27:49.268577] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:02.129 [2024-05-15 02:27:49.977872] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:02.129 [2024-05-15 02:27:49.977924] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:02.129 [2024-05-15 02:27:49.977944] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:02.129 [2024-05-15 02:27:50.064025] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:02.129 [2024-05-15 02:27:50.123798] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:02.129 [2024-05-15 02:27:50.123849] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:02.387 [2024-05-15 02:27:50.178107] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:02.387 [2024-05-15 02:27:50.178166] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:02.387 [2024-05-15 02:27:50.178201] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:02.387 [2024-05-15 02:27:50.264288] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:02.387 [2024-05-15 02:27:50.324255] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:02.387 [2024-05-15 02:27:50.324324] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 
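This stretch restarts the browser after the earlier bdev_nvme_stop_mdns_discovery: avahi re-resolves both advertised _nvme-disc._tcp services, the discovery controllers re-attach on port 4421, and the script then re-checks the RPC view. Condensed to a sketch, using only commands and arguments that appear in the trace (same socket and host NQN as this run):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

# Restart the mDNS browser and confirm it is registered under the name "mdns".
$rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
$rpc bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'    # expect: mdns

# Starting a second browser under the same name must fail with -17 (File exists);
# the NOT wrapper in the script asserts exactly that failure.
$rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
    || echo "duplicate mdns browser rejected, as expected"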
00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 [2024-05-15 02:27:53.388003] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:05.665 request: 00:27:05.665 { 00:27:05.665 "method": "bdev_nvme_start_mdns_discovery", 00:27:05.665 "params": { 00:27:05.665 "name": "cdc", 00:27:05.665 "svcname": "_nvme-disc._tcp", 00:27:05.665 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:05.665 } 00:27:05.665 } 00:27:05.665 Got JSON-RPC error response 00:27:05.665 GoRPCClient: error on JSON-RPC call 00:27:05.665 2024/05/15 02:27:53 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 88053 00:27:05.665 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 88053 00:27:05.665 [2024-05-15 02:27:53.571091] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 88076 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:05.923 Got SIGTERM, quitting. 00:27:05.923 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:05.923 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:05.923 avahi-daemon 0.8 exiting. 
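With the mDNS test finished, nvmftestfini tears everything down: the avahi poller stops, the two per-test host applications (pids 88053 and 88076 here) are killed, the initiator-side kernel modules are unloaded, and the nvmf target (pid 88022) is terminated. A hedged outline of that order, built from commands visible in this log (the pid variables are placeholders, and the final netns deletion is an assumption about what remove_spdk_ns does):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

# 1) stop discovery and the per-test host apps
$rpc bdev_nvme_stop_mdns_discovery -b mdns
kill "$host_pid1" "$host_pid2"            # 88053 and 88076 in this run

# 2) unload the initiator-side kernel modules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# 3) kill the nvmf target and drop the test network namespace
kill -0 "$nvmfpid" && kill "$nvmfpid"     # 88022 in this run
ip netns delete nvmf_tgt_ns_spdk          # roughly what remove_spdk_ns amounts to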
00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.923 rmmod nvme_tcp 00:27:05.923 rmmod nvme_fabrics 00:27:05.923 rmmod nvme_keyring 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 88022 ']' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 88022 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 88022 ']' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 88022 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88022 00:27:05.923 killing process with pid 88022 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88022' 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 88022 00:27:05.923 [2024-05-15 02:27:53.867527] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:05.923 02:27:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 88022 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:06.182 00:27:06.182 real 0m19.953s 00:27:06.182 user 0m39.734s 00:27:06.182 sys 0m1.877s 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.182 02:27:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:27:06.182 ************************************ 00:27:06.182 END TEST nvmf_mdns_discovery 00:27:06.182 ************************************ 00:27:06.182 02:27:54 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:27:06.182 02:27:54 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:06.182 02:27:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:06.182 02:27:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:06.182 02:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.182 ************************************ 00:27:06.182 START TEST nvmf_host_multipath 00:27:06.182 ************************************ 00:27:06.182 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:06.441 * Looking for test storage... 00:27:06.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.441 02:27:54 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:06.441 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:27:06.442 Cannot find device "nvmf_tgt_br" 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:06.442 Cannot find device "nvmf_tgt_br2" 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:06.442 Cannot find device "nvmf_tgt_br" 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:06.442 Cannot find device "nvmf_tgt_br2" 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:06.442 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.701 
02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:06.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:27:06.701 00:27:06.701 --- 10.0.0.2 ping statistics --- 00:27:06.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.701 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:06.701 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:06.701 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:27:06.701 00:27:06.701 --- 10.0.0.3 ping statistics --- 00:27:06.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.701 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:06.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:27:06.701 00:27:06.701 --- 10.0.0.1 ping statistics --- 00:27:06.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.701 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=88527 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 88527 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88527 ']' 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:06.701 02:27:54 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:06.701 [2024-05-15 02:27:54.676133] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:27:06.701 [2024-05-15 02:27:54.676269] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.959 [2024-05-15 02:27:54.834212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.959 [2024-05-15 02:27:54.916126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.959 [2024-05-15 02:27:54.916204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:06.959 [2024-05-15 02:27:54.916221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.959 [2024-05-15 02:27:54.916233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.959 [2024-05-15 02:27:54.916244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.959 [2024-05-15 02:27:54.916574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.959 [2024-05-15 02:27:54.916587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88527 00:27:07.216 02:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:07.474 [2024-05-15 02:27:55.393280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.474 02:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:08.038 Malloc0 00:27:08.038 02:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:08.038 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.295 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.554 [2024-05-15 02:27:56.536757] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:08.554 [2024-05-15 02:27:56.537234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.554 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:08.812 [2024-05-15 02:27:56.777380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88605 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
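From here the multipath test proper begins: bdevperf attaches the same subsystem over both listeners under a single controller name, and the script steers I/O by flipping ANA states on the target, then reads back which listener reports "optimized". A condensed sketch of that flow, with every command copied from the trace below (bdevperf answers RPCs on /var/tmp/bdevperf.sock, the target on the default socket):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc_py -s /var/tmp/bdevperf.sock"

# Two paths to nqn.2016-06.io.spdk:cnode1 under the one bdev name Nvme0;
# the second attach adds the 4421 path in multipath mode.
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# Steer I/O by changing ANA states on the target side ...
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

# ... and confirm which port now reports "optimized" (4421 is expected).
$rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'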
00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88605 /var/tmp/bdevperf.sock 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88605 ']' 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:08.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:08.812 02:27:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:09.379 02:27:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:09.379 02:27:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:09.379 02:27:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:09.637 02:27:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:10.202 Nvme0n1 00:27:10.202 02:27:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:10.460 Nvme0n1 00:27:10.460 02:27:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:10.460 02:27:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:11.841 02:27:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:11.841 02:27:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:11.841 02:27:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:12.099 02:28:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:12.099 02:28:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88662 00:27:12.099 02:28:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:12.099 02:28:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.677 Attaching 4 probes... 00:27:18.677 @path[10.0.0.2, 4421]: 15716 00:27:18.677 @path[10.0.0.2, 4421]: 16651 00:27:18.677 @path[10.0.0.2, 4421]: 17060 00:27:18.677 @path[10.0.0.2, 4421]: 16163 00:27:18.677 @path[10.0.0.2, 4421]: 16928 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88662 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:18.677 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.242 02:28:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.242 02:28:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:19.242 02:28:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88757 00:27:19.242 02:28:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:19.242 02:28:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:25.847 Attaching 4 probes... 
00:27:25.847 @path[10.0.0.2, 4420]: 15115 00:27:25.847 @path[10.0.0.2, 4420]: 15094 00:27:25.847 @path[10.0.0.2, 4420]: 16929 00:27:25.847 @path[10.0.0.2, 4420]: 16740 00:27:25.847 @path[10.0.0.2, 4420]: 16523 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88757 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:25.847 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:26.104 02:28:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:26.362 02:28:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:26.362 02:28:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88850 00:27:26.362 02:28:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:26.362 02:28:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.919 Attaching 4 probes... 
00:27:32.919 @path[10.0.0.2, 4421]: 13773 00:27:32.919 @path[10.0.0.2, 4421]: 16522 00:27:32.919 @path[10.0.0.2, 4421]: 16259 00:27:32.919 @path[10.0.0.2, 4421]: 16294 00:27:32.919 @path[10.0.0.2, 4421]: 16878 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88850 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:32.919 02:28:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:33.176 02:28:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:33.176 02:28:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88940 00:27:33.176 02:28:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:33.176 02:28:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:39.732 Attaching 4 probes... 
00:27:39.732 00:27:39.732 00:27:39.732 00:27:39.732 00:27:39.732 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:39.732 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88940 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:39.733 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:39.990 02:28:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:40.248 02:28:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:40.248 02:28:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89033 00:27:40.248 02:28:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:40.248 02:28:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.828 Attaching 4 probes... 
00:27:46.828 @path[10.0.0.2, 4421]: 13736 00:27:46.828 @path[10.0.0.2, 4421]: 16871 00:27:46.828 @path[10.0.0.2, 4421]: 16884 00:27:46.828 @path[10.0.0.2, 4421]: 16567 00:27:46.828 @path[10.0.0.2, 4421]: 16661 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89033 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.828 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:46.828 [2024-05-15 02:28:34.728233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 00:27:46.828 [2024-05-15 02:28:34.728416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228b900 is same with the state(5) to be set 
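Side note on the confirm_io_on_port cycles traced above: each one starts bpftrace counting I/O per path, sleeps, asks the target (nvmf_subsystem_get_listeners piped through jq) which listener currently reports the expected ANA state, then parses the @path counters out of trace.txt and checks that both agree with the expected port. The function name and the individual commands all appear in the xtrace lines; the body below is only a reconstruction from those lines, not the test's exact source, and it assumes the bpftrace collection step (bpftrace.sh + sleep 6) has already filled trace.txt.

    confirm_io_on_port() {
        local expected_state=$1 expected_port=$2
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
        # Port of the listener that currently advertises the expected ANA state.
        local active_port
        active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
        # First port that actually carried I/O according to the bpftrace @path counters.
        local port
        port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
        # Both must point at the port the test expects traffic on.
        [[ $active_port == "$expected_port" ]] && [[ $port == "$expected_port" ]]
    }

The escaped comparisons in the trace ([[ 4421 == \4\4\2\1 ]]) are just how bash xtrace renders that final check, and the rm -f trace.txt at host/multipath.sh@73 clears the counters before the next cycle.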
00:27:46.830 02:28:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:27:47.827 02:28:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:47.827 02:28:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89122 00:27:47.827 02:28:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:47.827 02:28:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:54.384 02:28:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:54.384 02:28:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.384 Attaching 4 probes... 
00:27:54.384 @path[10.0.0.2, 4420]: 14215 00:27:54.384 @path[10.0.0.2, 4420]: 16026 00:27:54.384 @path[10.0.0.2, 4420]: 15147 00:27:54.384 @path[10.0.0.2, 4420]: 15577 00:27:54.384 @path[10.0.0.2, 4420]: 15040 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89122 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:54.384 [2024-05-15 02:28:42.361635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:54.384 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.950 02:28:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:01.506 02:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:01.506 02:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89237 00:28:01.506 02:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:01.506 02:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88527 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:06.766 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:06.766 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.024 Attaching 4 probes... 
00:28:07.024 @path[10.0.0.2, 4421]: 16070 00:28:07.024 @path[10.0.0.2, 4421]: 16365 00:28:07.024 @path[10.0.0.2, 4421]: 16319 00:28:07.024 @path[10.0.0.2, 4421]: 16202 00:28:07.024 @path[10.0.0.2, 4421]: 16088 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89237 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88605 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88605 ']' 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88605 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88605 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88605' 00:28:07.024 killing process with pid 88605 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 88605 00:28:07.024 02:28:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88605 00:28:07.292 Connection closed with partial response: 00:28:07.292 00:28:07.292 00:28:07.292 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88605 00:28:07.292 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:07.292 [2024-05-15 02:27:56.859360] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:28:07.292 [2024-05-15 02:27:56.859498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88605 ] 00:28:07.292 [2024-05-15 02:27:56.994837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.292 [2024-05-15 02:27:57.055179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.292 Running I/O for 90 seconds... 
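Side note on the killprocess trace above (autotest_common.sh@946-970): it reduces to a small shell pattern, i.e. confirm the pid is non-empty and alive, check the process name so a sudo wrapper is not signalled directly, then kill and wait so the exit status is collected. The sketch below is a simplification reconstructed from the echoed commands; the real helper also handles the sudo and non-Linux branches that are only hinted at by the uname and "= sudo" checks in the trace.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # need a pid to act on
        kill -0 "$pid" || return 1             # process must still be running
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2 for an SPDK bdevperf app
            if [ "$name" = sudo ]; then
                return 1                       # sudo wrapper: the real helper signals its child instead
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap the child and collect its exit status
    }

Everything after the cat of try.txt above, starting with the "Starting SPDK v24.05-pre" banner and continuing through the nvme_qpair NOTICE lines below, is bdevperf's own log for the whole 90-second run, replayed into the console at teardown.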
00:28:07.292 [2024-05-15 02:28:07.205997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.206614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.206642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:07.292 [2024-05-15 02:28:07.207916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.292 [2024-05-15 02:28:07.207939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.207969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.207992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.208404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.293 [2024-05-15 02:28:07.208438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.211940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:07.293 [2024-05-15 02:28:07.211967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.293 [2024-05-15 02:28:07.212582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.293 [2024-05-15 02:28:07.212617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
00:28:07.293 [2024-05-15 02:28:07.212646 - 02:28:07.216411] nvme_qpair.c: repeated *NOTICE* pairs from nvme_io_qpair_print_command / spdk_nvme_print_completion: WRITE sqid:1 nsid:1 lba:33000-33136 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:32624-32744 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:07.294 [2024-05-15 02:28:13.850967 - 02:28:13.857095] nvme_qpair.c: repeated *NOTICE* pairs from nvme_io_qpair_print_command / spdk_nvme_print_completion: READ sqid:1 nsid:1 lba:61832 len:8 and WRITE sqid:1 nsid:1 lba:61840-62776 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:07.297 [2024-05-15 02:28:21.100313 - 02:28:21.103082] nvme_qpair.c: repeated *NOTICE* pairs from nvme_io_qpair_print_command / spdk_nvme_print_completion: WRITE sqid:1 nsid:1 lba:80936-81360 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0
sqhd:0041 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.103964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.103981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.299 [2024-05-15 02:28:21.104437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.104760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.104777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:07.299 [2024-05-15 02:28:21.105079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.299 [2024-05-15 02:28:21.105109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105617] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.105965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.105994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.300 [2024-05-15 02:28:21.106262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:21.106634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:21.106650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.300 [2024-05-15 02:28:34.730144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.300 [2024-05-15 02:28:34.730177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.300 [2024-05-15 02:28:34.730205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.300 [2024-05-15 02:28:34.730233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f549f0 is same with the state(5) to be set 00:28:07.300 [2024-05-15 02:28:34.730327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:34.730349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:34.730412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:34.730452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:34.730482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.300 [2024-05-15 02:28:34.730498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.300 [2024-05-15 02:28:34.730512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.730982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.730997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:07.301 [2024-05-15 02:28:34.731110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 
02:28:34.731458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.301 [2024-05-15 02:28:34.731638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.301 [2024-05-15 02:28:34.731653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.731977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.731992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.302 [2024-05-15 02:28:34.732373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.302 [2024-05-15 02:28:34.732923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.302 [2024-05-15 02:28:34.732938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.732952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.732968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.732982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.732998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 
02:28:34.733036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.303 [2024-05-15 02:28:34.733849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.733881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.733913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.733960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.733976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.733991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119192 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.303 [2024-05-15 02:28:34.734259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.303 [2024-05-15 02:28:34.734274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.304 [2024-05-15 02:28:34.734304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.304 [2024-05-15 02:28:34.734334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.304 [2024-05-15 02:28:34.734364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.304 [2024-05-15 02:28:34.734421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.304 [2024-05-15 02:28:34.734477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.304 [2024-05-15 02:28:34.734489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119296 len:8 PRP1 0x0 PRP2 0x0 00:28:07.304 [2024-05-15 02:28:34.734503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.304 [2024-05-15 02:28:34.734551] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f82310 was disconnected and freed. reset controller. 00:28:07.304 [2024-05-15 02:28:34.735998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.304 [2024-05-15 02:28:34.736054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f549f0 (9): Bad file descriptor 00:28:07.304 [2024-05-15 02:28:34.736213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.304 [2024-05-15 02:28:34.736306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.304 [2024-05-15 02:28:34.736343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f549f0 with addr=10.0.0.2, port=4421 00:28:07.304 [2024-05-15 02:28:34.736371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f549f0 is same with the state(5) to be set 00:28:07.304 [2024-05-15 02:28:34.736435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f549f0 (9): Bad file descriptor 00:28:07.304 [2024-05-15 02:28:34.736475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.304 [2024-05-15 02:28:34.736504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.304 [2024-05-15 02:28:34.736528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.304 [2024-05-15 02:28:34.736567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.304 [2024-05-15 02:28:34.736610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.304 [2024-05-15 02:28:44.834873] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
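Annotation: the burst of ABORTED - SQ DELETION completions above, ending with the qpair being disconnected and freed, is what bdev_nvme reports when the path carrying the verify workload disappears while I/O is still queued: the queued requests are aborted, the controller is reset, and the initiator retries the connection (here against 10.0.0.2 port 4421, with the first attempts refused with errno 111) until the reset finally succeeds about ten seconds later. A path flip like this can be driven from the target side with rpc.py; the sketch below is illustrative only - the remove_listener call on port 4420 appears verbatim later in this log, while the add on port 4421 and the exact ordering used by multipath.sh are assumed here rather than shown:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # publish a second path so the initiator has somewhere to reconnect (assumed step)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # drop the path the I/O is currently using; queued requests complete as ABORTED - SQ DELETION
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420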
00:28:07.304 Received shutdown signal, test time was about 56.393301 seconds
00:28:07.304
00:28:07.304 Latency(us)
00:28:07.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:07.304 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:07.304 Verification LBA range: start 0x0 length 0x4000
00:28:07.304 Nvme0n1 : 56.39 6906.09 26.98 0.00 0.00 18499.91 484.07 7046430.72
00:28:07.304 ===================================================================================================================
00:28:07.304 Total : 6906.09 26.98 0.00 0.00 18499.91 484.07 7046430.72
00:28:07.304 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:07.610 rmmod nvme_tcp
00:28:07.610 rmmod nvme_fabrics
00:28:07.610 rmmod nvme_keyring
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 88527 ']'
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 88527
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88527 ']'
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88527
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88527
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:28:07.610 killing process with pid 88527
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88527'
00:28:07.610 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 88527
00:28:07.610 [2024-05-15 02:28:55.600594] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:28:07.610 02:28:55
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88527 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:07.899 ************************************ 00:28:07.899 END TEST nvmf_host_multipath 00:28:07.899 ************************************ 00:28:07.899 00:28:07.899 real 1m1.701s 00:28:07.899 user 2m56.294s 00:28:07.899 sys 0m13.854s 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:07.899 02:28:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:07.899 02:28:55 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:07.899 02:28:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:07.899 02:28:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:07.899 02:28:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.899 ************************************ 00:28:07.899 START TEST nvmf_timeout 00:28:07.899 ************************************ 00:28:07.899 02:28:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:08.160 * Looking for test storage... 
00:28:08.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.160 
02:28:55 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.160 02:28:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.160 02:28:56 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:08.160 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:08.161 Cannot find device "nvmf_tgt_br" 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:08.161 Cannot find device "nvmf_tgt_br2" 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:08.161 Cannot find device "nvmf_tgt_br" 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:08.161 Cannot find device "nvmf_tgt_br2" 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:08.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:08.161 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:08.161 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:08.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:28:08.420 00:28:08.420 --- 10.0.0.2 ping statistics --- 00:28:08.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.420 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:08.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:08.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:28:08.420 00:28:08.420 --- 10.0.0.3 ping statistics --- 00:28:08.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.420 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:08.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:28:08.420 00:28:08.420 --- 10.0.0.1 ping statistics --- 00:28:08.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.420 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=89509 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 89509 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89509 ']' 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:08.420 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.420 [2024-05-15 02:28:56.412684] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
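Annotation: stripped of the xtrace noise, the nvmf_veth_init sequence above builds a small bridged topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the default namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the veth peer ends are joined through the nvmf_br bridge before the pings confirm reachability. A condensed sketch of the equivalent manual bring-up, taken from the trace (the second target interface is left out for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> target namespace, as verified above

With that in place, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... -m 0x3), so the listener created later on 10.0.0.2:4420 is reachable from the host over the bridge.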
00:28:08.420 [2024-05-15 02:28:56.412763] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.679 [2024-05-15 02:28:56.547299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.679 [2024-05-15 02:28:56.617572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.679 [2024-05-15 02:28:56.617625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.679 [2024-05-15 02:28:56.617639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.679 [2024-05-15 02:28:56.617649] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.679 [2024-05-15 02:28:56.617658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.679 [2024-05-15 02:28:56.617779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.679 [2024-05-15 02:28:56.617791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.938 02:28:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:09.196 [2024-05-15 02:28:57.018461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.196 02:28:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:09.455 Malloc0 00:28:09.455 02:28:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.713 02:28:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:09.972 02:28:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.230 [2024-05-15 02:28:58.109717] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:10.230 [2024-05-15 02:28:58.109995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89580 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89580 /var/tmp/bdevperf.sock 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89580 ']' 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.230 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.230 [2024-05-15 02:28:58.174920] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:28:10.230 [2024-05-15 02:28:58.175019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89580 ] 00:28:10.488 [2024-05-15 02:28:58.309705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.488 [2024-05-15 02:28:58.369239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.488 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.488 02:28:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:10.488 02:28:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:10.747 02:28:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:11.004 NVMe0n1 00:28:11.263 02:28:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=89604 00:28:11.263 02:28:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:11.263 02:28:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:11.263 Running I/O for 10 seconds... 
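Annotation: condensed from the setup traced above, the timeout test exposes a 64 MiB, 512-byte-block malloc bdev (Malloc0) as a namespace of nqn.2016-06.io.spdk:cnode1 on a TCP listener at 10.0.0.2:4420, then drives it from a separate bdevperf process (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f) whose controller is attached with deliberately short reconnect settings, so the fault injection that follows can exercise the timeout paths. The commands below are copied from the trace; only the $rpc/$brpc shorthands are added here for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"   # bdevperf's private RPC socket
  # target side: transport, backing bdev, subsystem, namespace, listener
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: attach with a 2 s reconnect delay and a 5 s ctrlr-loss timeout
  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests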
00:28:12.197 02:29:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.456 [2024-05-15 02:29:00.313245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe6c10 is same with the state(5) to be set 00:28:12.456 [2024-05-15 02:29:00.313302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe6c10 is same with the state(5) to be set 00:28:12.456 [2024-05-15 02:29:00.313740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.456 [2024-05-15 02:29:00.313784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.456 [2024-05-15 02:29:00.313807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.456 [2024-05-15 02:29:00.313819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.313979] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.313990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.314000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.457 [2024-05-15 02:29:00.314021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314192] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:12.457 [2024-05-15 02:29:00.314640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.457 [2024-05-15 02:29:00.314805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.457 [2024-05-15 02:29:00.314814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.314836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.314857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.314883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.314904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.314925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.314947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.314968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.314979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.314989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.458 [2024-05-15 02:29:00.315243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:12.458 [2024-05-15 02:29:00.315588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.458 [2024-05-15 02:29:00.315739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.458 [2024-05-15 02:29:00.315751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.315983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.315993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316014] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 
[2024-05-15 02:29:00.316463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.459 [2024-05-15 02:29:00.316516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.459 [2024-05-15 02:29:00.316537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.459 [2024-05-15 02:29:00.316558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.459 [2024-05-15 02:29:00.316579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.459 [2024-05-15 02:29:00.316610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-05-15 02:29:00.316659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.459 [2024-05-15 02:29:00.316691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.459 [2024-05-15 02:29:00.316709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77464 len:8 PRP1 0x0 PRP2 0x0 00:28:12.460 [2024-05-15 02:29:00.316720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.460 [2024-05-15 02:29:00.316765] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xefb030 was disconnected and freed. reset controller. 
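Note: the wall of *NOTICE* lines above is the expected fallout of host/timeout.sh@55 removing the subsystem's TCP listener while bdevperf I/O is in flight. The target drops the connection, the host-side NVMe driver aborts every queued READ/WRITE with ABORTED - SQ DELETION, and bdev_nvme frees the disconnected qpair and schedules a controller reset. A minimal sketch of that fault injection, reassembled only from rpc.py calls visible in this trace (NQN, address and port copied from the log; this is an illustration, not part of the captured output):

    # Drop the listener the initiator is connected to; queued host I/O is
    # aborted (SQ DELETION) and bdev_nvme starts resetting the controller.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Restoring the listener later (as timeout.sh@71 does further down) lets the
    # reconnect loop succeed again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420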
00:28:12.460 [2024-05-15 02:29:00.317029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.460 [2024-05-15 02:29:00.317115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89a00 (9): Bad file descriptor 00:28:12.460 [2024-05-15 02:29:00.317212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.460 [2024-05-15 02:29:00.317268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.460 [2024-05-15 02:29:00.317285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89a00 with addr=10.0.0.2, port=4420 00:28:12.460 [2024-05-15 02:29:00.317296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89a00 is same with the state(5) to be set 00:28:12.460 [2024-05-15 02:29:00.317315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89a00 (9): Bad file descriptor 00:28:12.460 [2024-05-15 02:29:00.317332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:12.460 [2024-05-15 02:29:00.317346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:12.460 [2024-05-15 02:29:00.317356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:12.460 [2024-05-15 02:29:00.317377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.460 [2024-05-15 02:29:00.317402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.460 02:29:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:14.358 [2024-05-15 02:29:02.317654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.358 [2024-05-15 02:29:02.317771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.358 [2024-05-15 02:29:02.317793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89a00 with addr=10.0.0.2, port=4420 00:28:14.358 [2024-05-15 02:29:02.317808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89a00 is same with the state(5) to be set 00:28:14.358 [2024-05-15 02:29:02.317836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89a00 (9): Bad file descriptor 00:28:14.358 [2024-05-15 02:29:02.317870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:14.358 [2024-05-15 02:29:02.317883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:14.358 [2024-05-15 02:29:02.317894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:14.358 [2024-05-15 02:29:02.317922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
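Note: with the listener gone, every reconnect attempt to 10.0.0.2:4420 fails in posix_sock_create with errno 111 (connection refused), so controller re-initialization fails and bdev_nvme immediately schedules the next reset. timeout.sh@56 sleeps 2 seconds and then checks that the controller and its bdev are still registered. A sketch approximating the get_controller/get_bdev helpers whose expansion appears in the next lines (socket path, RPC names and expected values copied from the trace; helper structure is assumed):

    # Query bdevperf over its RPC socket; shortly after the outage begins the
    # controller and its namespace bdev should still exist.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    [[ "$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')" == "NVMe0" ]]
    [[ "$($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')" == "NVMe0n1" ]]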
00:28:14.359 [2024-05-15 02:29:02.317934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:14.359 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:14.359 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.359 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:14.923 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:14.923 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:14.923 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:14.923 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:14.923 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:15.181 02:29:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:16.554 [2024-05-15 02:29:04.318112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.554 [2024-05-15 02:29:04.318223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.554 [2024-05-15 02:29:04.318244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89a00 with addr=10.0.0.2, port=4420 00:28:16.554 [2024-05-15 02:29:04.318258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89a00 is same with the state(5) to be set 00:28:16.554 [2024-05-15 02:29:04.318284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89a00 (9): Bad file descriptor 00:28:16.554 [2024-05-15 02:29:04.318304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:16.554 [2024-05-15 02:29:04.318314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:16.554 [2024-05-15 02:29:04.318324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:16.554 [2024-05-15 02:29:04.318351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:16.554 [2024-05-15 02:29:04.318363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:18.452 [2024-05-15 02:29:06.318496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
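Note: the checks at timeout.sh@57/@58 above still find NVMe0 and NVMe0n1, then @61 sleeps 5 more seconds while the reconnect attempts at 02:29:04 and 02:29:06 keep failing. By the time @62/@63 run, the controller and its bdev have been torn down, so the same queries come back empty (the '' == '' comparisons below), and bdevperf reports the partially failed run in the latency table that follows. A hypothetical form of that post-timeout assertion, using the same RPC calls but now expecting empty output:

    # After bdev_nvme gives up on the unreachable controller, both lists are empty.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    [[ -z "$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')" ]]
    [[ -z "$($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')" ]]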
00:28:19.386 00:28:19.386 Latency(us) 00:28:19.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.386 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:19.386 Verification LBA range: start 0x0 length 0x4000 00:28:19.386 NVMe0n1 : 8.16 1179.91 4.61 15.68 0.00 106897.10 2263.97 7015926.69 00:28:19.386 =================================================================================================================== 00:28:19.386 Total : 1179.91 4.61 15.68 0.00 106897.10 2263.97 7015926.69 00:28:19.386 0 00:28:20.056 02:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:20.056 02:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:20.056 02:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:20.314 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:20.314 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:20.314 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:20.314 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 89604 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89580 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89580 ']' 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89580 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89580 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:20.572 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:20.830 killing process with pid 89580 00:28:20.830 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89580' 00:28:20.830 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89580 00:28:20.830 Received shutdown signal, test time was about 9.431864 seconds 00:28:20.830 00:28:20.830 Latency(us) 00:28:20.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.830 =================================================================================================================== 00:28:20.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.830 02:29:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89580 00:28:20.830 02:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.088 [2024-05-15 02:29:08.981672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=89704 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 89704 /var/tmp/bdevperf.sock 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89704 ']' 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:21.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.088 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:21.088 [2024-05-15 02:29:09.042690] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:28:21.088 [2024-05-15 02:29:09.042770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89704 ] 00:28:21.347 [2024-05-15 02:29:09.179692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.347 [2024-05-15 02:29:09.237813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.347 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.347 02:29:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:21.347 02:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:21.916 02:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:22.175 NVMe0n1 00:28:22.175 02:29:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=89732 00:28:22.175 02:29:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:22.175 02:29:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:22.175 Running I/O for 10 seconds... 
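Note: for the second half of the test the listener is restored, a fresh bdevperf instance (pid 89704) is started on the same socket, and the controller is attached with explicit recovery knobs before the 10-second verify workload begins. A condensed sketch of that setup, assembled from the commands in this trace (paths, flags and names copied from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Same bdev_nvme_set_options call as timeout.sh@78 issues (-r -1).
    $rpc -s $sock bdev_nvme_set_options -r -1

    # Attach the target: reconnect every 1 s, fail fast I/O after 2 s, and give up
    # on the controller if it stays unreachable for 5 s.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the verify workload that produces the "Running I/O for 10 seconds" run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests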
00:28:23.110 02:29:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.372 [2024-05-15 02:29:11.309297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.309425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2760 is same with the state(5) to be set 00:28:23.372 [2024-05-15 02:29:11.310962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.372 [2024-05-15 02:29:11.311048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.372 [2024-05-15 02:29:11.311098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.372 [2024-05-15 02:29:11.311139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.372 [2024-05-15 02:29:11.311178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.372 [2024-05-15 02:29:11.311214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-05-15 02:29:11.311227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.372 [2024-05-15 02:29:11.311238 to 02:29:11.313667] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs repeated for every in-flight I/O on qid:1: READ lba:76752-76912 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:77040-77608 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.375 [2024-05-15 02:29:11.313702 to 02:29:11.314993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: queued WRITE lba:77616-77720 and READ lba:76920-77032 (len:8, PRP1 0x0 PRP2 0x0) failed with the same ABORTED - SQ DELETION (00/08) status
00:28:23.376 [2024-05-15 02:29:11.315047] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22adf10 was disconnected and freed. reset controller.
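The block above is the host side of the qpair teardown: once the TCP connection to the target is lost, every in-flight and queued command on I/O qpair 1 is completed with ABORTED - SQ DELETION before bdev_nvme frees the qpair and schedules a controller reset. In spdk_nvme_print_completion output the "(00/08)" is the (status code type/status code) pair, i.e. generic command status 0x08, Command Aborted due to SQ Deletion. A rough, hypothetical shell sketch for tallying these events from a saved copy of this console output (build.log is an assumed filename, not something the test produces):
# sketch only: count aborted completions and the READ/WRITE command prints on qid:1
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' build.log
grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' build.log | wc -l
grep -o 'WRITE sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' build.log | wc -l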
00:28:23.376 [2024-05-15 02:29:11.315155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:23.376 [2024-05-15 02:29:11.315173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.376 [2024-05-15 02:29:11.315185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:23.376 [2024-05-15 02:29:11.315195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.376 [2024-05-15 02:29:11.315205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:23.376 [2024-05-15 02:29:11.315214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.376 [2024-05-15 02:29:11.315224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:23.376 [2024-05-15 02:29:11.315234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.376 [2024-05-15 02:29:11.315243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set
00:28:23.376 [2024-05-15 02:29:11.315519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:23.376 [2024-05-15 02:29:11.315558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor
00:28:23.376 [2024-05-15 02:29:11.315665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.376 [2024-05-15 02:29:11.315716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.376 [2024-05-15 02:29:11.315733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420
00:28:23.376 [2024-05-15 02:29:11.315745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set
00:28:23.376 [2024-05-15 02:29:11.315764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor
00:28:23.376 [2024-05-15 02:29:11.315781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:23.376 [2024-05-15 02:29:11.315797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:23.376 [2024-05-15 02:29:11.315814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:23.376 [2024-05-15 02:29:11.326147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
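Here the first reset attempt fails: errno = 111 from posix_sock_create is ECONNREFUSED, because the reconnect lands while the test still has the 10.0.0.2:4420 listener removed, so nvme_ctrlr_process_init cannot complete and bdev_nvme retries the reset. One way to watch the same state from outside the test would be to query the initiator-side bdevperf application over the RPC socket it exposes; bdev_nvme_get_controllers is the stock SPDK RPC for listing attached controllers (sketch only, and the exact output fields vary by SPDK version):
# sketch: poll the bdevperf app while the listener is down; socket path taken from this log
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers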
00:28:23.376 [2024-05-15 02:29:11.326186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:23.376 02:29:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:28:24.427 [2024-05-15 02:29:12.326380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.427 [2024-05-15 02:29:12.326515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.427 [2024-05-15 02:29:12.326551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420
00:28:24.427 [2024-05-15 02:29:12.326567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set
00:28:24.427 [2024-05-15 02:29:12.326614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor
00:28:24.427 [2024-05-15 02:29:12.326657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:24.427 [2024-05-15 02:29:12.326670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:24.427 [2024-05-15 02:29:12.326681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:24.427 [2024-05-15 02:29:12.326716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:24.427 [2024-05-15 02:29:12.326730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:24.427 02:29:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:24.683 [2024-05-15 02:29:12.671019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:24.683 02:29:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 89732
00:28:25.611 [2024-05-15 02:29:13.337896] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:32.164
00:28:32.164                                                                             Latency(us)
00:28:32.164 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:32.164 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:32.164      Verification LBA range: start 0x0 length 0x4000
00:28:32.164      NVMe0n1             :      10.01    5127.66      20.03       0.00     0.00   24913.63    2204.39 3035150.89
00:28:32.164 ===================================================================================================================
00:28:32.164 Total                       :               5127.66      20.03       0.00     0.00   24913.63    2204.39 3035150.89
00:28:32.164 0
00:28:32.164 02:29:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=89788
00:28:32.164 02:29:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:32.421 02:29:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:28:32.421 Running I/O for 10 seconds...
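Taken together, this is the mechanism of the timeout test: the target's TCP listener is bounced while bdevperf keeps verify I/O running; removing the listener forces the abort/reset loop shown above, re-adding it lets the reset complete, and the Latency(us) table confirms the first pass finished cleanly (5127.66 IOPS, 0.00 Fail/s, 0.00 TO/s). A condensed sketch of that listener bounce, reusing the rpc.py subcommands and the 10.0.0.2:4420 listener seen in this log (the /home/vagrant paths are specific to this VM and are not a copy of host/timeout.sh):
# minimal sketch of the fault injection pattern
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # host reconnects now fail with errno 111
sleep 1                                                               # let a few reset attempts fail
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # listener restored: the reset should succeed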
00:28:33.371 02:29:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:33.631 [2024-05-15 02:29:21.495940 to 02:29:21.496656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3840 is same with the state(5) to be set (the same *ERROR* line repeats ~55 times with consecutive timestamps while the listener is torn down)
00:28:33.631 [2024-05-15 02:29:21.498418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.631 [2024-05-15 02:29:21.498467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.631 [2024-05-15 02:29:21.498491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.631 [2024-05-15 02:29:21.498502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.631 [2024-05-15 02:29:21.498515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.631 [2024-05-15 02:29:21.498524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.631 [2024-05-15 02:29:21.498536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.631 [2024-05-15 02:29:21.498548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.631 [2024-05-15 02:29:21.498566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.631 [2024-05-15 02:29:21.498582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.631 [2024-05-15 02:29:21.498596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.631 [2024-05-15 02:29:21.498605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.631 [2024-05-15 02:29:21.498617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.631 [2024-05-15 02:29:21.498626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.631 [2024-05-15 02:29:21.498638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.631 [2024-05-15 02:29:21.498647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.631 [2024-05-15 02:29:21.498659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.631 [2024-05-15 02:29:21.498668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.498851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.498894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.498935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.498972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.498991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.632 [2024-05-15 02:29:21.499308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:33.632 [2024-05-15 02:29:21.499651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.632 [2024-05-15 02:29:21.499824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.632 [2024-05-15 02:29:21.499845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.499861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.499897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.499917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.499931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.499950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.499967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.499986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500304] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 
[2024-05-15 02:29:21.500938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.633 [2024-05-15 02:29:21.500955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.500974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.501011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.501028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.501047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.501069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.501098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.501116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.633 [2024-05-15 02:29:21.501139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.633 [2024-05-15 02:29:21.501156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72976 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.501969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.634 [2024-05-15 02:29:21.501987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73000 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73008 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73016 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73024 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73032 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 
02:29:21.502239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73040 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73048 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.634 [2024-05-15 02:29:21.502324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.634 [2024-05-15 02:29:21.502333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73056 len:8 PRP1 0x0 PRP2 0x0 00:28:33.634 [2024-05-15 02:29:21.502342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.634 [2024-05-15 02:29:21.502352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.635 [2024-05-15 02:29:21.502359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.635 [2024-05-15 02:29:21.502367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72296 len:8 PRP1 0x0 PRP2 0x0 00:28:33.635 [2024-05-15 02:29:21.502376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.635 [2024-05-15 02:29:21.502425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.635 [2024-05-15 02:29:21.502438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72304 len:8 PRP1 0x0 PRP2 0x0 00:28:33.635 [2024-05-15 02:29:21.502450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502505] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22abfe0 was disconnected and freed. reset controller. 
00:28:33.635 [2024-05-15 02:29:21.502645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.635 [2024-05-15 02:29:21.502674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.635 [2024-05-15 02:29:21.502698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.635 [2024-05-15 02:29:21.502718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.635 [2024-05-15 02:29:21.502760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.635 [2024-05-15 02:29:21.502778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set 00:28:33.635 [2024-05-15 02:29:21.503080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.635 [2024-05-15 02:29:21.503139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor 00:28:33.635 [2024-05-15 02:29:21.503290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-05-15 02:29:21.503365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-05-15 02:29:21.503414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420 00:28:33.635 [2024-05-15 02:29:21.503439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set 00:28:33.635 [2024-05-15 02:29:21.503464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor 00:28:33.635 [2024-05-15 02:29:21.503481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.635 [2024-05-15 02:29:21.503491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.635 [2024-05-15 02:29:21.503502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.635 [2024-05-15 02:29:21.503524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.635 [2024-05-15 02:29:21.503536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.635 02:29:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:28:34.616 [2024-05-15 02:29:22.503722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.616 [2024-05-15 02:29:22.503848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.616 [2024-05-15 02:29:22.503872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420 00:28:34.616 [2024-05-15 02:29:22.503886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set 00:28:34.616 [2024-05-15 02:29:22.503921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor 00:28:34.616 [2024-05-15 02:29:22.503952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.617 [2024-05-15 02:29:22.503968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.617 [2024-05-15 02:29:22.503980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.617 [2024-05-15 02:29:22.504008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.617 [2024-05-15 02:29:22.504020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.552 [2024-05-15 02:29:23.504168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.552 [2024-05-15 02:29:23.504276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.552 [2024-05-15 02:29:23.504299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420 00:28:35.552 [2024-05-15 02:29:23.504313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set 00:28:35.552 [2024-05-15 02:29:23.504341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor 00:28:35.552 [2024-05-15 02:29:23.504361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:35.552 [2024-05-15 02:29:23.504370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:35.552 [2024-05-15 02:29:23.504381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:35.552 [2024-05-15 02:29:23.504423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.552 [2024-05-15 02:29:23.504436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:36.922 [2024-05-15 02:29:24.507949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.922 [2024-05-15 02:29:24.508060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.922 [2024-05-15 02:29:24.508081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223ca00 with addr=10.0.0.2, port=4420 00:28:36.922 [2024-05-15 02:29:24.508095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ca00 is same with the state(5) to be set 00:28:36.922 [2024-05-15 02:29:24.508371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ca00 (9): Bad file descriptor 00:28:36.922 [2024-05-15 02:29:24.508645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:36.922 [2024-05-15 02:29:24.508670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:36.922 [2024-05-15 02:29:24.508682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:36.922 [2024-05-15 02:29:24.512703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:36.922 [2024-05-15 02:29:24.512740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:36.922 02:29:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.922 [2024-05-15 02:29:24.814075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.922 02:29:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 89788 00:28:37.857 [2024-05-15 02:29:25.552087] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
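The sequence above is the listener toggle at the heart of this timeout test: host/timeout.sh removes the TCP listener from the subsystem mid-I/O (step @99), which is why every in-flight command completes as ABORTED - SQ DELETION and every reconnect attempt fails with connect() errno 111, and then restores the listener (step @102) so the pending controller reset can finally succeed. A minimal sketch of that toggle, using only the rpc.py calls and the sleep that appear verbatim in this run (paths, NQN and 10.0.0.2:4420 are the values from this log):

    # drop the listener so the host's reconnect attempts hit a closed port (timeout.sh step @99)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # leave the port closed long enough for several failed reset/reconnect cycles (step @101)
    sleep 3
    # restore the listener; the next reconnect attempt completes the pending controller reset (step @102)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420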
00:28:43.121 00:28:43.121 Latency(us) 00:28:43.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.121 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:43.121 Verification LBA range: start 0x0 length 0x4000 00:28:43.121 NVMe0n1 : 10.01 4798.05 18.74 3308.69 0.00 15754.19 633.02 3019898.88 00:28:43.121 =================================================================================================================== 00:28:43.121 Total : 4798.05 18.74 3308.69 0.00 15754.19 0.00 3019898.88 00:28:43.121 0 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 89704 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89704 ']' 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89704 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89704 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:43.121 killing process with pid 89704 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:43.121 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89704' 00:28:43.121 Received shutdown signal, test time was about 10.000000 seconds 00:28:43.121 00:28:43.121 Latency(us) 00:28:43.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.122 =================================================================================================================== 00:28:43.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89704 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89704 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=89849 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 89849 /var/tmp/bdevperf.sock 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89849 ']' 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:43.122 [2024-05-15 02:29:30.600341] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:28:43.122 [2024-05-15 02:29:30.600457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89849 ] 00:28:43.122 [2024-05-15 02:29:30.741844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.122 [2024-05-15 02:29:30.826531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=89859 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89849 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:28:43.122 02:29:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:28:43.379 02:29:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:43.947 NVMe0n1 00:28:43.947 02:29:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=89911 00:28:43.947 02:29:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.947 02:29:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:28:44.205 Running I/O for 10 seconds... 
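(The second bdevperf instance above exercises the reconnect-delay and controller-loss-timeout options: bdevperf is started with -z on an RPC socket, scripts/bpftrace.sh attaches scripts/bpf/nvmf_timeout.bt to its pid, and the controller is attached with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, which matches the roughly 2-second spacing of the 'reconnect delay' events in trace.txt further down. A rough shell sketch of the same setup follows; the flag values are copied from the trace, while the paths and the socket wait loop are assumptions, and the bpftrace attachment is omitted.)

  # Sketch under assumptions; host/timeout.sh is the authoritative sequence.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf paused (-z) so the job is kicked off later via perform_tests.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &

  # Wait for the RPC socket (the test itself uses waitforlisten from autotest_common.sh).
  while [ ! -S "$SOCK" ]; do sleep 0.2; done

  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Start the I/O defined by the command-line parameters above (randread, qd 128, 4 KiB, 10 s).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests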
00:28:45.219 02:29:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.219 [2024-05-15 02:29:33.166407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.219 [2024-05-15 02:29:33.166588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166681] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5c00 is same with the state(5) to be set 00:28:45.220 [2024-05-15 02:29:33.166931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.166963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.166989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129176 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 
[2024-05-15 02:29:33.167357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.220 [2024-05-15 02:29:33.167462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.220 [2024-05-15 02:29:33.167474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.167986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.167998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.221 [2024-05-15 02:29:33.168118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.221 [2024-05-15 02:29:33.168130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 
[2024-05-15 02:29:33.168488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168705] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.222 [2024-05-15 02:29:33.168782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.222 [2024-05-15 02:29:33.168793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.168978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.223 [2024-05-15 02:29:33.169292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.223 [2024-05-15 02:29:33.169304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40168 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:45.224 [2024-05-15 02:29:33.169612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.224 [2024-05-15 02:29:33.169787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.224 [2024-05-15 02:29:33.169799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20030 is same with the state(5) to be set 00:28:45.224 [2024-05-15 02:29:33.169812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.224 [2024-05-15 02:29:33.169821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.225 [2024-05-15 02:29:33.169830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115872 len:8 PRP1 0x0 
PRP2 0x0 00:28:45.225 [2024-05-15 02:29:33.169839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.225 [2024-05-15 02:29:33.169887] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc20030 was disconnected and freed. reset controller. 00:28:45.225 [2024-05-15 02:29:33.170198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-05-15 02:29:33.170285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaea00 (9): Bad file descriptor 00:28:45.225 [2024-05-15 02:29:33.170418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-05-15 02:29:33.170473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-05-15 02:29:33.170490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaea00 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-05-15 02:29:33.170501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaea00 is same with the state(5) to be set 00:28:45.225 [2024-05-15 02:29:33.170521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaea00 (9): Bad file descriptor 00:28:45.225 [2024-05-15 02:29:33.170538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-05-15 02:29:33.170549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-05-15 02:29:33.170560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-05-15 02:29:33.170584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.225 [2024-05-15 02:29:33.170597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 02:29:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 89911 00:28:47.754 [2024-05-15 02:29:35.170797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.754 [2024-05-15 02:29:35.170900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.754 [2024-05-15 02:29:35.170920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaea00 with addr=10.0.0.2, port=4420 00:28:47.754 [2024-05-15 02:29:35.170935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaea00 is same with the state(5) to be set 00:28:47.754 [2024-05-15 02:29:35.170962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaea00 (9): Bad file descriptor 00:28:47.754 [2024-05-15 02:29:35.170995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.754 [2024-05-15 02:29:35.171008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.754 [2024-05-15 02:29:35.171020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.754 [2024-05-15 02:29:35.171048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.754 [2024-05-15 02:29:35.171061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.654 [2024-05-15 02:29:37.171238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-05-15 02:29:37.171336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-05-15 02:29:37.171357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaea00 with addr=10.0.0.2, port=4420 00:28:49.654 [2024-05-15 02:29:37.171372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaea00 is same with the state(5) to be set 00:28:49.654 [2024-05-15 02:29:37.171411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaea00 (9): Bad file descriptor 00:28:49.654 [2024-05-15 02:29:37.171434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.654 [2024-05-15 02:29:37.171445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.654 [2024-05-15 02:29:37.171457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.654 [2024-05-15 02:29:37.171485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.654 [2024-05-15 02:29:37.171498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.582 [2024-05-15 02:29:39.171639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.516 00:28:52.516 Latency(us) 00:28:52.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.516 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:28:52.516 NVMe0n1 : 8.20 2368.24 9.25 15.61 0.00 53639.54 2591.65 7015926.69 00:28:52.516 =================================================================================================================== 00:28:52.516 Total : 2368.24 9.25 15.61 0.00 53639.54 2591.65 7015926.69 00:28:52.516 0 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:52.516 Attaching 5 probes... 
00:28:52.516 1697.264523: reset bdev controller NVMe0 00:28:52.516 1697.405143: reconnect bdev controller NVMe0 00:28:52.516 3697.716970: reconnect delay bdev controller NVMe0 00:28:52.516 3697.746558: reconnect bdev controller NVMe0 00:28:52.516 5698.166301: reconnect delay bdev controller NVMe0 00:28:52.516 5698.190822: reconnect bdev controller NVMe0 00:28:52.516 7698.664584: reconnect delay bdev controller NVMe0 00:28:52.516 7698.690596: reconnect bdev controller NVMe0 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 89859 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 89849 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89849 ']' 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89849 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89849 00:28:52.516 killing process with pid 89849 00:28:52.516 Received shutdown signal, test time was about 8.255629 seconds 00:28:52.516 00:28:52.516 Latency(us) 00:28:52.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.516 =================================================================================================================== 00:28:52.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89849' 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89849 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89849 00:28:52.516 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.082 rmmod nvme_tcp 00:28:53.082 rmmod nvme_fabrics 00:28:53.082 rmmod nvme_keyring 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 89509 ']' 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 89509 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89509 ']' 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89509 00:28:53.082 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89509 00:28:53.083 killing process with pid 89509 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89509' 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89509 00:28:53.083 [2024-05-15 02:29:40.968627] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:53.083 02:29:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89509 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:53.341 00:28:53.341 real 0m45.312s 00:28:53.341 user 2m14.254s 00:28:53.341 sys 0m4.820s 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:53.341 02:29:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 ************************************ 00:28:53.341 END TEST nvmf_timeout 00:28:53.341 ************************************ 00:28:53.341 02:29:41 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:28:53.341 02:29:41 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:28:53.341 02:29:41 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.341 02:29:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 02:29:41 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:53.341 ************************************ 00:28:53.341 END TEST nvmf_tcp 00:28:53.341 ************************************ 00:28:53.341 00:28:53.341 real 15m30.306s 00:28:53.341 user 42m6.054s 00:28:53.341 sys 3m15.411s 00:28:53.341 02:29:41 nvmf_tcp -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:28:53.341 02:29:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 02:29:41 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:28:53.341 02:29:41 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:53.341 02:29:41 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:53.341 02:29:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:53.341 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 ************************************ 00:28:53.341 START TEST spdkcli_nvmf_tcp 00:28:53.341 ************************************ 00:28:53.341 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:53.599 * Looking for test storage... 00:28:53.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.599 02:29:41 
spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:53.599 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90072 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 90072 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 90072 ']' 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:53.600 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.600 [2024-05-15 02:29:41.483242] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:28:53.600 [2024-05-15 02:29:41.483353] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90072 ] 00:28:53.858 [2024-05-15 02:29:41.619981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:53.858 [2024-05-15 02:29:41.680859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.858 [2024-05-15 02:29:41.680873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.858 02:29:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:53.858 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:53.858 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:53.858 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:53.858 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:53.858 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:53.858 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:53.858 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:53.858 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:53.858 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:53.858 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:53.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:53.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:53.859 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:53.859 ' 00:28:57.167 [2024-05-15 02:29:44.443235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.734 [2024-05-15 02:29:45.712018] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:57.734 [2024-05-15 02:29:45.712316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:00.262 [2024-05-15 02:29:48.061764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:02.163 [2024-05-15 02:29:50.075110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:04.065 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:04.065 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:04.065 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:04.065 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:04.065 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:04.065 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:04.065 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:04.065 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:04.065 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:04.065 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:04.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:04.065 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:04.065 02:29:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:04.324 02:29:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:04.324 02:29:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:04.324 02:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:04.324 02:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.324 02:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.582 02:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:04.582 02:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:04.582 02:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.582 02:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:04.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:04.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:04.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:04.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:04.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:04.582 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:04.583 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:04.583 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:04.583 ' 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:09.846 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:09.846 
Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:09.846 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:09.846 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 90072 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 90072 ']' 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 90072 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90072 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:09.846 killing process with pid 90072 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90072' 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 90072 00:29:09.846 [2024-05-15 02:29:57.844849] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:09.846 02:29:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 90072 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 90072 ']' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 90072 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 90072 ']' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 90072 00:29:10.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90072) - No such process 00:29:10.104 Process with pid 90072 is not found 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 90072 is not found' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:10.104 00:29:10.104 real 0m16.718s 
00:29:10.104 user 0m36.167s 00:29:10.104 sys 0m0.802s 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:10.104 ************************************ 00:29:10.104 END TEST spdkcli_nvmf_tcp 00:29:10.104 ************************************ 00:29:10.104 02:29:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:10.104 02:29:58 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:10.104 02:29:58 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:10.104 02:29:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:10.104 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:29:10.104 ************************************ 00:29:10.104 START TEST nvmf_identify_passthru 00:29:10.104 ************************************ 00:29:10.104 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:10.363 * Looking for test storage... 00:29:10.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:10.363 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.363 02:29:58 
nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:10.363 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:10.363 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.363 02:29:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.363 02:29:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:10.364 02:29:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.364 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.364 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:10.364 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:10.364 Cannot find device "nvmf_tgt_br" 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:10.364 Cannot find device "nvmf_tgt_br2" 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:10.364 Cannot find device "nvmf_tgt_br" 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:10.364 Cannot find device "nvmf_tgt_br2" 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:10.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:10.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:10.364 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:10.622 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:10.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:29:10.622 00:29:10.622 --- 10.0.0.2 ping statistics --- 00:29:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.623 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:10.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:10.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:29:10.623 00:29:10.623 --- 10.0.0.3 ping statistics --- 00:29:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.623 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:10.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:29:10.623 00:29:10.623 --- 10.0.0.1 ping statistics --- 00:29:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.623 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:10.623 02:29:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:10.623 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:10.623 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:10.881 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
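The serial number captured here (12340, read from the local controller at 0000:00:10.0) is the value the passthru test expects to see again when the same controller is identified over NVMe/TCP a few steps later. A minimal sketch of that comparison, assuming the 0000:00:10.0 PCIe address and the 10.0.0.2:4420 listener used elsewhere in this run (illustrative only; the script itself stores these values in nvme_serial_number and nvmf_serial_number):
  # identify the local PCIe controller and the NVMe-oF passthru subsystem, then compare serials
  local_serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' | awk '/Serial Number:/ {print $3}')
  remote_serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
  [[ "$local_serial" == "$remote_serial" ]]   # passthru should expose the physical controller's identity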
00:29:10.881 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:10.881 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:10.881 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:11.140 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:11.140 02:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:11.140 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.140 02:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.140 02:29:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.140 02:29:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=90441 00:29:11.140 02:29:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:11.140 02:29:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:11.140 02:29:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 90441 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 90441 ']' 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:11.140 02:29:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.140 [2024-05-15 02:29:59.077412] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:29:11.140 [2024-05-15 02:29:59.077501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.398 [2024-05-15 02:29:59.219349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.398 [2024-05-15 02:29:59.323146] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.398 [2024-05-15 02:29:59.323227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.398 [2024-05-15 02:29:59.323254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.398 [2024-05-15 02:29:59.323272] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:11.398 [2024-05-15 02:29:59.323287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.398 [2024-05-15 02:29:59.323472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.398 [2024-05-15 02:29:59.323612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.398 [2024-05-15 02:29:59.323795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.398 [2024-05-15 02:29:59.323807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:12.331 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.331 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.331 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 [2024-05-15 02:30:00.359773] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 [2024-05-15 02:30:00.373581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 Nvme0n1 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 [2024-05-15 02:30:00.503643] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:12.590 [2024-05-15 02:30:00.504299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.590 [ 00:29:12.590 { 00:29:12.590 "allow_any_host": true, 00:29:12.590 "hosts": [], 00:29:12.590 "listen_addresses": [], 00:29:12.590 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:12.590 "subtype": "Discovery" 00:29:12.590 }, 00:29:12.590 { 00:29:12.590 "allow_any_host": true, 00:29:12.590 "hosts": [], 00:29:12.590 "listen_addresses": [ 00:29:12.590 { 00:29:12.590 "adrfam": "IPv4", 00:29:12.590 "traddr": "10.0.0.2", 00:29:12.590 "trsvcid": "4420", 00:29:12.590 "trtype": "TCP" 00:29:12.590 } 00:29:12.590 ], 00:29:12.590 "max_cntlid": 65519, 00:29:12.590 "max_namespaces": 1, 00:29:12.590 "min_cntlid": 1, 00:29:12.590 "model_number": "SPDK bdev Controller", 00:29:12.590 "namespaces": [ 00:29:12.590 { 00:29:12.590 "bdev_name": "Nvme0n1", 00:29:12.590 "name": "Nvme0n1", 00:29:12.590 "nguid": "9E8EEAE925D2451C8AD69695AFE52896", 00:29:12.590 "nsid": 1, 00:29:12.590 "uuid": "9e8eeae9-25d2-451c-8ad6-9695afe52896" 00:29:12.590 } 00:29:12.590 ], 00:29:12.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.590 "serial_number": "SPDK00000000000001", 00:29:12.590 "subtype": "NVMe" 00:29:12.590 } 00:29:12.590 ] 00:29:12.590 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:12.590 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:12.849 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:12.849 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:12.849 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 
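
The passthru target is assembled entirely through the RPC sequence traced above: the custom identify handler is enabled before framework init (required because the app was started with --wait-for-rpc), the TCP transport is created, the local PCIe controller is attached as Nvme0, and it is exported as a single-namespace subsystem with a TCP listener on 10.0.0.2:4420. An equivalent sequence driven directly with rpc.py, assuming the default socket path and an SPDK checkout:

    rpc=./scripts/rpc.py

    $rpc nvmf_set_config --passthru-identify-ctrlr      # must precede framework_start_init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                             # prints the JSON shown in the trace

With max_namespaces set to 1 and the passthru handler enabled, identify commands sent to cnode1 are forwarded to the underlying controller, which is what the serial and model comparison below relies on.
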
00:29:12.849 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.107 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.107 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:13.107 02:30:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:13.107 02:30:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:13.107 02:30:00 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.107 02:30:00 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.107 rmmod nvme_tcp 00:29:13.107 rmmod nvme_fabrics 00:29:13.107 rmmod nvme_keyring 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 90441 ']' 00:29:13.107 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 90441 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 90441 ']' 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 90441 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90441 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:13.107 killing process with pid 90441 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90441' 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 90441 00:29:13.107 [2024-05-15 02:30:01.103080] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:13.107 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 90441 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.365 02:30:01 nvmf_identify_passthru -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.365 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:13.365 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.365 02:30:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:13.365 00:29:13.365 real 0m3.261s 00:29:13.365 user 0m8.455s 00:29:13.365 sys 0m0.770s 00:29:13.365 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:13.365 02:30:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:13.365 ************************************ 00:29:13.365 END TEST nvmf_identify_passthru 00:29:13.365 ************************************ 00:29:13.624 02:30:01 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:13.624 02:30:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:13.624 02:30:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:13.624 02:30:01 -- common/autotest_common.sh@10 -- # set +x 00:29:13.624 ************************************ 00:29:13.624 START TEST nvmf_dif 00:29:13.624 ************************************ 00:29:13.624 02:30:01 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:13.624 * Looking for test storage... 
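
Once the serial and model numbers read back over NVMe/TCP match the values read over PCIe, nvmftestfini tears the setup down: the subsystem is deleted, the target process is stopped, the kernel initiator modules are unloaded, and the test network is flushed. A condensed sketch of that cleanup, assuming $nvmfpid was captured when the target was launched (command order mirrors the trace; error handling is omitted):

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # wait for the reactors to exit
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true       # remove_spdk_ns equivalent (assumption)
    ip -4 addr flush nvmf_init_if
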
00:29:13.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:13.624 02:30:01 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.624 02:30:01 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.624 02:30:01 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.624 02:30:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.624 02:30:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.624 02:30:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.624 02:30:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:13.624 02:30:01 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:13.624 02:30:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.624 02:30:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:13.624 02:30:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:13.624 02:30:01 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:13.624 Cannot find device "nvmf_tgt_br" 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:13.624 Cannot find device "nvmf_tgt_br2" 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:13.624 Cannot find device "nvmf_tgt_br" 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:13.624 Cannot find device "nvmf_tgt_br2" 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:13.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:13.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:13.624 02:30:01 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:13.882 
02:30:01 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:13.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:29:13.882 00:29:13.882 --- 10.0.0.2 ping statistics --- 00:29:13.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.882 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:13.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:13.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:29:13.882 00:29:13.882 --- 10.0.0.3 ping statistics --- 00:29:13.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.882 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:13.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:13.882 00:29:13.882 --- 10.0.0.1 ping statistics --- 00:29:13.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.882 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:13.882 02:30:01 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:14.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.141 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:14.141 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.141 02:30:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:14.141 02:30:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:14.141 02:30:02 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.141 02:30:02 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:14.141 02:30:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:14.398 02:30:02 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=90769 00:29:14.398 
02:30:02 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 90769 00:29:14.398 02:30:02 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 90769 ']' 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:14.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:14.398 02:30:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:14.398 [2024-05-15 02:30:02.230648] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:29:14.398 [2024-05-15 02:30:02.230780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.398 [2024-05-15 02:30:02.377833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.656 [2024-05-15 02:30:02.467987] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.656 [2024-05-15 02:30:02.468274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.656 [2024-05-15 02:30:02.468569] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.656 [2024-05-15 02:30:02.468594] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.656 [2024-05-15 02:30:02.468608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
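
For dif.sh the framework rebuilds the veth topology from scratch; the "Cannot find device" and "Cannot open network namespace" messages above are just the cleanup pass over interfaces the previous test already removed. The topology nvmf_veth_init then creates is a veth pair whose far end (10.0.0.2/24) sits inside nvmf_tgt_ns_spdk for the target, a host-side initiator interface (10.0.0.1/24), and a bridge joining the peer ends, plus an iptables rule admitting port 4420. Reproduced as a standalone sketch (root privileges assumed; the second target interface, 10.0.0.3, is built the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2        # host-to-namespace reachability, as in the trace
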
00:29:14.656 [2024-05-15 02:30:02.468653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.245 02:30:03 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:15.245 02:30:03 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:15.245 02:30:03 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.245 02:30:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.245 02:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 02:30:03 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.503 02:30:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:15.503 02:30:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 [2024-05-15 02:30:03.285664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.503 02:30:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 ************************************ 00:29:15.503 START TEST fio_dif_1_default 00:29:15.503 ************************************ 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 bdev_null0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.503 [2024-05-15 02:30:03.329592] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:15.503 [2024-05-15 02:30:03.329846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.503 { 00:29:15.503 "params": { 00:29:15.503 "name": "Nvme$subsystem", 00:29:15.503 "trtype": "$TEST_TRANSPORT", 00:29:15.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.503 "adrfam": "ipv4", 00:29:15.503 "trsvcid": "$NVMF_PORT", 00:29:15.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.503 "hdgst": ${hdgst:-false}, 00:29:15.503 "ddgst": ${ddgst:-false} 00:29:15.503 }, 00:29:15.503 "method": "bdev_nvme_attach_controller" 00:29:15.503 } 00:29:15.503 EOF 00:29:15.503 )") 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.503 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:15.504 "params": { 00:29:15.504 "name": "Nvme0", 00:29:15.504 "trtype": "tcp", 00:29:15.504 "traddr": "10.0.0.2", 00:29:15.504 "adrfam": "ipv4", 00:29:15.504 "trsvcid": "4420", 00:29:15.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.504 "hdgst": false, 00:29:15.504 "ddgst": false 00:29:15.504 }, 00:29:15.504 "method": "bdev_nvme_attach_controller" 00:29:15.504 }' 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:15.504 02:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.761 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:15.761 fio-3.35 00:29:15.761 Starting 1 thread 00:29:27.956 00:29:27.956 filename0: (groupid=0, jobs=1): err= 0: pid=90843: Wed May 15 02:30:14 2024 00:29:27.956 read: IOPS=1200, BW=4801KiB/s (4916kB/s)(46.9MiB/10002msec) 00:29:27.956 slat (nsec): min=6762, max=67614, avg=9934.49, stdev=5454.48 00:29:27.956 clat (usec): min=454, max=42734, avg=3302.22, stdev=10064.33 00:29:27.956 lat (usec): min=462, max=42754, avg=3312.15, stdev=10065.38 00:29:27.957 clat percentiles (usec): 00:29:27.957 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 486], 00:29:27.957 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 586], 00:29:27.957 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 1614], 95.00th=[40633], 00:29:27.957 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:29:27.957 | 99.99th=[42730] 00:29:27.957 bw ( KiB/s): min= 608, max=13760, per=100.00%, avg=4946.53, stdev=3901.00, samples=19 
00:29:27.957 iops : min= 152, max= 3440, avg=1236.63, stdev=975.25, samples=19 00:29:27.957 lat (usec) : 500=39.80%, 750=45.52%, 1000=1.18% 00:29:27.957 lat (msec) : 2=6.83%, 4=0.03%, 50=6.63% 00:29:27.957 cpu : usr=89.61%, sys=9.23%, ctx=17, majf=0, minf=9 00:29:27.957 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:27.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.957 issued rwts: total=12004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.957 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:27.957 00:29:27.957 Run status group 0 (all jobs): 00:29:27.957 READ: bw=4801KiB/s (4916kB/s), 4801KiB/s-4801KiB/s (4916kB/s-4916kB/s), io=46.9MiB (49.2MB), run=10002-10002msec 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 ************************************ 00:29:27.957 END TEST fio_dif_1_default 00:29:27.957 ************************************ 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 00:29:27.957 real 0m10.943s 00:29:27.957 user 0m9.582s 00:29:27.957 sys 0m1.147s 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:27.957 02:30:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:27.957 02:30:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 ************************************ 00:29:27.957 START TEST fio_dif_1_multi_subsystems 00:29:27.957 ************************************ 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 bdev_null0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 [2024-05-15 02:30:14.321221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 bdev_null1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
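
Each fio_dif subsystem is backed by a DIF-capable null bdev rather than real media: bdev_null_create asks for a 64 MB device with 512-byte blocks, 16 bytes of metadata per block and protection information type 1, and the bdev is then exported through the same 10.0.0.2:4420 listener. The per-subsystem RPC sequence as an rpc.py sketch (script and socket paths assumed):

    rpc=./scripts/rpc.py

    # 64 MB null bdev, 512 B blocks + 16 B metadata, DIF type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The multi-subsystem variant traced here simply repeats the same four calls for bdev_null1 / cnode1, as the surrounding trace shows.
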
00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.957 { 00:29:27.957 "params": { 00:29:27.957 "name": "Nvme$subsystem", 00:29:27.957 "trtype": "$TEST_TRANSPORT", 00:29:27.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.957 "adrfam": "ipv4", 00:29:27.957 "trsvcid": "$NVMF_PORT", 00:29:27.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.957 "hdgst": ${hdgst:-false}, 00:29:27.957 "ddgst": ${ddgst:-false} 00:29:27.957 }, 00:29:27.957 "method": "bdev_nvme_attach_controller" 00:29:27.957 } 00:29:27.957 EOF 00:29:27.957 )") 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:27.957 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.957 { 00:29:27.957 "params": { 00:29:27.957 "name": "Nvme$subsystem", 00:29:27.957 "trtype": "$TEST_TRANSPORT", 00:29:27.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.957 "adrfam": "ipv4", 00:29:27.957 "trsvcid": "$NVMF_PORT", 00:29:27.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.958 "hdgst": ${hdgst:-false}, 00:29:27.958 "ddgst": ${ddgst:-false} 00:29:27.958 }, 00:29:27.958 "method": "bdev_nvme_attach_controller" 00:29:27.958 } 00:29:27.958 EOF 00:29:27.958 )") 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
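
The traces that follow show how the workload side is wired up: gen_nvmf_target_json emits a bdev-layer JSON config with one bdev_nvme_attach_controller entry per subsystem, fio is started with the SPDK bdev ioengine preloaded, and the JSON is fed to it on /dev/fd/62. A standalone, hedged equivalent is sketched below; the job-file values mirror the randread / 4 KiB / iodepth=4 workload visible in the fio output, while the JSON layout follows the standard SPDK config shape and the thread=1 line reflects the bdev plugin's documented requirement (both are assumptions, not copied from the test). A minimal job file (the test generates its own via gen_fio_conf):

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=4k
    iodepth=4

    [filename0]
    filename=Nvme0n1

and a bdev config pointing at one of the subsystems created earlier:

    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                    "traddr": "10.0.0.2", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }

With those saved as dif.fio and bdev.json, the invocation mirrors the one traced below:

    LD_PRELOAD=./build/fio/spdk_bdev fio --spdk_json_conf=bdev.json dif.fio
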
00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:27.958 "params": { 00:29:27.958 "name": "Nvme0", 00:29:27.958 "trtype": "tcp", 00:29:27.958 "traddr": "10.0.0.2", 00:29:27.958 "adrfam": "ipv4", 00:29:27.958 "trsvcid": "4420", 00:29:27.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:27.958 "hdgst": false, 00:29:27.958 "ddgst": false 00:29:27.958 }, 00:29:27.958 "method": "bdev_nvme_attach_controller" 00:29:27.958 },{ 00:29:27.958 "params": { 00:29:27.958 "name": "Nvme1", 00:29:27.958 "trtype": "tcp", 00:29:27.958 "traddr": "10.0.0.2", 00:29:27.958 "adrfam": "ipv4", 00:29:27.958 "trsvcid": "4420", 00:29:27.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.958 "hdgst": false, 00:29:27.958 "ddgst": false 00:29:27.958 }, 00:29:27.958 "method": "bdev_nvme_attach_controller" 00:29:27.958 }' 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:27.958 02:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.958 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:27.958 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:27.958 fio-3.35 00:29:27.958 Starting 2 threads 00:29:37.925 00:29:37.925 filename0: (groupid=0, jobs=1): err= 0: pid=90933: Wed May 15 02:30:25 2024 00:29:37.925 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.5MiB/10037msec) 00:29:37.925 slat (nsec): min=7800, max=74400, avg=11716.18, stdev=7736.84 00:29:37.925 clat (usec): min=472, max=42925, avg=7240.16, stdev=14740.89 00:29:37.925 lat (usec): min=480, max=42953, avg=7251.88, stdev=14742.35 00:29:37.925 clat percentiles (usec): 00:29:37.925 | 1.00th=[ 529], 5.00th=[ 603], 10.00th=[ 644], 20.00th=[ 660], 00:29:37.925 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 742], 60.00th=[ 840], 00:29:37.925 | 70.00th=[ 1172], 80.00th=[ 1598], 90.00th=[41157], 95.00th=[41157], 00:29:37.925 | 99.00th=[41681], 
99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:37.925 | 99.99th=[42730] 00:29:37.925 bw ( KiB/s): min= 512, max= 6624, per=47.35%, avg=2204.80, stdev=1745.72, samples=20 00:29:37.925 iops : min= 128, max= 1656, avg=551.20, stdev=436.43, samples=20 00:29:37.925 lat (usec) : 500=0.24%, 750=52.25%, 1000=14.52% 00:29:37.925 lat (msec) : 2=16.39%, 4=0.80%, 50=15.81% 00:29:37.925 cpu : usr=93.21%, sys=5.65%, ctx=9, majf=0, minf=9 00:29:37.925 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.925 issued rwts: total=5516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.925 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.925 filename1: (groupid=0, jobs=1): err= 0: pid=90934: Wed May 15 02:30:25 2024 00:29:37.925 read: IOPS=615, BW=2462KiB/s (2521kB/s)(24.1MiB/10015msec) 00:29:37.925 slat (nsec): min=7818, max=65729, avg=11547.96, stdev=7132.84 00:29:37.925 clat (usec): min=470, max=42946, avg=6462.77, stdev=14022.97 00:29:37.925 lat (usec): min=478, max=43002, avg=6474.32, stdev=14024.84 00:29:37.925 clat percentiles (usec): 00:29:37.925 | 1.00th=[ 510], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 635], 00:29:37.925 | 30.00th=[ 652], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 775], 00:29:37.926 | 70.00th=[ 979], 80.00th=[ 1565], 90.00th=[41157], 95.00th=[41157], 00:29:37.926 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:37.926 | 99.99th=[42730] 00:29:37.926 bw ( KiB/s): min= 512, max=12992, per=52.91%, avg=2464.00, stdev=3212.61, samples=20 00:29:37.926 iops : min= 128, max= 3248, avg=616.00, stdev=803.15, samples=20 00:29:37.926 lat (usec) : 500=0.47%, 750=57.14%, 1000=12.62% 00:29:37.926 lat (msec) : 2=15.43%, 4=0.39%, 50=13.95% 00:29:37.926 cpu : usr=93.91%, sys=4.99%, ctx=10, majf=0, minf=0 00:29:37.926 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.926 issued rwts: total=6164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.926 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.926 00:29:37.926 Run status group 0 (all jobs): 00:29:37.926 READ: bw=4655KiB/s (4766kB/s), 2198KiB/s-2462KiB/s (2251kB/s-2521kB/s), io=45.6MiB (47.8MB), run=10015-10037msec 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 00:29:37.926 real 0m11.137s 00:29:37.926 user 0m19.526s 00:29:37.926 sys 0m1.285s 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 ************************************ 00:29:37.926 END TEST fio_dif_1_multi_subsystems 00:29:37.926 ************************************ 00:29:37.926 02:30:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:37.926 02:30:25 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:37.926 02:30:25 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 ************************************ 00:29:37.926 START TEST fio_dif_rand_params 00:29:37.926 ************************************ 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:37.926 02:30:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 bdev_null0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.926 [2024-05-15 02:30:25.503233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.926 { 00:29:37.926 "params": { 00:29:37.926 "name": "Nvme$subsystem", 00:29:37.926 "trtype": "$TEST_TRANSPORT", 00:29:37.926 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:37.926 "adrfam": "ipv4", 00:29:37.926 "trsvcid": "$NVMF_PORT", 00:29:37.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.926 "hdgst": ${hdgst:-false}, 00:29:37.926 "ddgst": ${ddgst:-false} 00:29:37.926 }, 00:29:37.926 "method": "bdev_nvme_attach_controller" 00:29:37.926 } 00:29:37.926 EOF 00:29:37.926 )") 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.926 "params": { 00:29:37.926 "name": "Nvme0", 00:29:37.926 "trtype": "tcp", 00:29:37.926 "traddr": "10.0.0.2", 00:29:37.926 "adrfam": "ipv4", 00:29:37.926 "trsvcid": "4420", 00:29:37.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.926 "hdgst": false, 00:29:37.926 "ddgst": false 00:29:37.926 }, 00:29:37.926 "method": "bdev_nvme_attach_controller" 00:29:37.926 }' 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:37.926 02:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:37.926 ... 
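The invocation traced above preloads the SPDK bdev engine and hands both inputs to fio through process substitution: the generated JSON config on /dev/fd/62 and the fio job file on /dev/fd/61. A standalone equivalent using ordinary files is sketched below; the job parameters mirror the ones the trace reports for this run (randread, 128 KiB blocks, iodepth 3, 3 threads, 5 s runtime), while the file names and the Nvme0n1 bdev name are assumptions.

# Sketch only: the same run expressed with regular files instead of /dev/fd/62
# and /dev/fd/61. bdev.json is the config produced by the generator sketched
# earlier; dif.fio is a hand-written job file with the parameters seen above.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF

# spdk_bdev is an external fio ioengine, so the plugin library is preloaded,
# exactly as the LD_PRELOAD line in the trace does.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio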
00:29:37.926 fio-3.35 00:29:37.926 Starting 3 threads 00:29:44.486 00:29:44.486 filename0: (groupid=0, jobs=1): err= 0: pid=91023: Wed May 15 02:30:31 2024 00:29:44.486 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(57.8MiB/5016msec) 00:29:44.486 slat (nsec): min=7805, max=76633, avg=30366.93, stdev=13811.61 00:29:44.486 clat (usec): min=18044, max=40735, avg=32501.03, stdev=3047.19 00:29:44.486 lat (usec): min=18060, max=40765, avg=32531.40, stdev=3048.03 00:29:44.486 clat percentiles (usec): 00:29:44.486 | 1.00th=[25560], 5.00th=[27919], 10.00th=[28705], 20.00th=[30278], 00:29:44.486 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32375], 60.00th=[33162], 00:29:44.486 | 70.00th=[34341], 80.00th=[34866], 90.00th=[36439], 95.00th=[37487], 00:29:44.486 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:29:44.486 | 99.99th=[40633] 00:29:44.486 bw ( KiB/s): min=11264, max=12288, per=35.01%, avg=11747.90, stdev=486.46, samples=10 00:29:44.486 iops : min= 88, max= 96, avg=91.70, stdev= 3.71, samples=10 00:29:44.486 lat (msec) : 20=0.43%, 50=99.57% 00:29:44.486 cpu : usr=92.48%, sys=5.98%, ctx=49, majf=0, minf=0 00:29:44.486 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:44.486 filename0: (groupid=0, jobs=1): err= 0: pid=91024: Wed May 15 02:30:31 2024 00:29:44.486 read: IOPS=70, BW=9035KiB/s (9252kB/s)(44.2MiB/5015msec) 00:29:44.486 slat (nsec): min=8027, max=72636, avg=23247.97, stdev=12005.94 00:29:44.486 clat (usec): min=31102, max=49157, avg=42437.53, stdev=3317.54 00:29:44.486 lat (usec): min=31118, max=49181, avg=42460.77, stdev=3316.72 00:29:44.486 clat percentiles (usec): 00:29:44.486 | 1.00th=[35914], 5.00th=[37487], 10.00th=[38011], 20.00th=[39584], 00:29:44.486 | 30.00th=[40633], 40.00th=[41681], 50.00th=[42730], 60.00th=[43779], 00:29:44.486 | 70.00th=[44303], 80.00th=[45351], 90.00th=[46924], 95.00th=[47449], 00:29:44.486 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:29:44.486 | 99.99th=[49021] 00:29:44.486 bw ( KiB/s): min= 8448, max= 9984, per=26.78%, avg=8985.60, stdev=518.36, samples=10 00:29:44.486 iops : min= 66, max= 78, avg=70.20, stdev= 4.05, samples=10 00:29:44.486 lat (msec) : 50=100.00% 00:29:44.486 cpu : usr=92.80%, sys=5.90%, ctx=8, majf=0, minf=0 00:29:44.486 IO depths : 1=32.5%, 2=67.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:44.486 filename0: (groupid=0, jobs=1): err= 0: pid=91025: Wed May 15 02:30:31 2024 00:29:44.486 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(62.4MiB/5016msec) 00:29:44.486 slat (nsec): min=8101, max=69943, avg=31115.64, stdev=12546.12 00:29:44.486 clat (usec): min=17983, max=38127, avg=30058.51, stdev=2986.41 00:29:44.486 lat (usec): min=18045, max=38158, avg=30089.63, stdev=2986.07 00:29:44.486 clat percentiles (usec): 00:29:44.486 | 1.00th=[23725], 5.00th=[25035], 10.00th=[26084], 20.00th=[27657], 00:29:44.486 | 30.00th=[28443], 40.00th=[29230], 
50.00th=[30016], 60.00th=[30540], 00:29:44.486 | 70.00th=[31589], 80.00th=[32637], 90.00th=[34341], 95.00th=[34866], 00:29:44.486 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:29:44.486 | 99.99th=[38011] 00:29:44.486 bw ( KiB/s): min=11776, max=13824, per=37.98%, avg=12746.10, stdev=760.17, samples=10 00:29:44.486 iops : min= 92, max= 108, avg=99.50, stdev= 5.91, samples=10 00:29:44.486 lat (msec) : 20=0.20%, 50=99.80% 00:29:44.486 cpu : usr=91.88%, sys=6.44%, ctx=6, majf=0, minf=3 00:29:44.486 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.486 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:44.486 00:29:44.486 Run status group 0 (all jobs): 00:29:44.486 READ: bw=32.8MiB/s (34.4MB/s), 9035KiB/s-12.4MiB/s (9252kB/s-13.0MB/s), io=164MiB (172MB), run=5015-5016msec 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:44.486 02:30:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 bdev_null0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 [2024-05-15 02:30:31.447013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.486 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 bdev_null1 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 bdev_null2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.487 { 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme$subsystem", 00:29:44.487 "trtype": "$TEST_TRANSPORT", 00:29:44.487 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "$NVMF_PORT", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.487 "hdgst": ${hdgst:-false}, 00:29:44.487 "ddgst": ${ddgst:-false} 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 } 00:29:44.487 EOF 00:29:44.487 )") 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.487 { 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme$subsystem", 00:29:44.487 "trtype": "$TEST_TRANSPORT", 00:29:44.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "$NVMF_PORT", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.487 "hdgst": ${hdgst:-false}, 00:29:44.487 "ddgst": ${ddgst:-false} 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 } 00:29:44.487 EOF 00:29:44.487 )") 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:44.487 02:30:31 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.487 { 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme$subsystem", 00:29:44.487 "trtype": "$TEST_TRANSPORT", 00:29:44.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "$NVMF_PORT", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.487 "hdgst": ${hdgst:-false}, 00:29:44.487 "ddgst": ${ddgst:-false} 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 } 00:29:44.487 EOF 00:29:44.487 )") 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme0", 00:29:44.487 "trtype": "tcp", 00:29:44.487 "traddr": "10.0.0.2", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "4420", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.487 "hdgst": false, 00:29:44.487 "ddgst": false 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 },{ 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme1", 00:29:44.487 "trtype": "tcp", 00:29:44.487 "traddr": "10.0.0.2", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "4420", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.487 "hdgst": false, 00:29:44.487 "ddgst": false 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 },{ 00:29:44.487 "params": { 00:29:44.487 "name": "Nvme2", 00:29:44.487 "trtype": "tcp", 00:29:44.487 "traddr": "10.0.0.2", 00:29:44.487 "adrfam": "ipv4", 00:29:44.487 "trsvcid": "4420", 00:29:44.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:44.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:44.487 "hdgst": false, 00:29:44.487 "ddgst": false 00:29:44.487 }, 00:29:44.487 "method": "bdev_nvme_attach_controller" 00:29:44.487 }' 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:44.487 02:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.488 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.488 ... 00:29:44.488 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.488 ... 00:29:44.488 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.488 ... 00:29:44.488 fio-3.35 00:29:44.488 Starting 24 threads 00:29:56.680 00:29:56.680 filename0: (groupid=0, jobs=1): err= 0: pid=91081: Wed May 15 02:30:42 2024 00:29:56.680 read: IOPS=90, BW=361KiB/s (369kB/s)(3680KiB/10200msec) 00:29:56.680 slat (usec): min=4, max=8097, avg=44.94, stdev=397.83 00:29:56.680 clat (msec): min=6, max=531, avg=176.59, stdev=105.85 00:29:56.680 lat (msec): min=6, max=531, avg=176.63, stdev=105.85 00:29:56.680 clat percentiles (msec): 00:29:56.680 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 35], 20.00th=[ 72], 00:29:56.680 | 30.00th=[ 102], 40.00th=[ 146], 50.00th=[ 171], 60.00th=[ 207], 00:29:56.680 | 70.00th=[ 228], 80.00th=[ 292], 90.00th=[ 317], 95.00th=[ 347], 00:29:56.681 | 99.00th=[ 363], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:29:56.681 | 99.99th=[ 531] 00:29:56.681 bw ( KiB/s): min= 176, max= 1408, per=5.39%, avg=361.45, stdev=282.95, samples=20 00:29:56.681 iops : min= 44, max= 352, avg=90.35, stdev=70.74, samples=20 00:29:56.681 lat (msec) : 10=3.48%, 20=3.48%, 50=3.48%, 100=18.70%, 250=43.80% 00:29:56.681 lat (msec) : 500=26.52%, 750=0.54% 00:29:56.681 cpu : usr=38.49%, sys=1.61%, ctx=1069, majf=0, minf=9 00:29:56.681 IO depths : 1=1.0%, 2=3.0%, 4=12.2%, 8=72.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=90.5%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91082: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10171msec) 00:29:56.681 slat (usec): min=7, max=3637, avg=38.10, stdev=156.83 00:29:56.681 clat (msec): min=16, max=465, avg=230.86, stdev=121.68 00:29:56.681 lat (msec): min=16, max=465, avg=230.90, stdev=121.67 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 78], 20.00th=[ 104], 00:29:56.681 | 30.00th=[ 155], 40.00th=[ 178], 50.00th=[ 251], 60.00th=[ 292], 00:29:56.681 | 70.00th=[ 305], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 439], 00:29:56.681 | 99.00th=[ 464], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:29:56.681 | 99.99th=[ 468] 00:29:56.681 bw ( KiB/s): min= 128, max= 1026, per=4.11%, avg=275.30, stdev=200.75, samples=20 00:29:56.681 iops : min= 32, max= 256, avg=68.80, stdev=50.09, samples=20 00:29:56.681 lat (msec) : 20=2.27%, 50=4.55%, 100=8.81%, 250=32.81%, 500=51.56% 00:29:56.681 cpu : usr=36.84%, sys=1.70%, ctx=1079, majf=0, minf=9 00:29:56.681 IO depths : 1=5.5%, 2=11.2%, 4=23.3%, 8=53.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=93.6%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:56.681 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91083: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=89, BW=357KiB/s (365kB/s)(3640KiB/10204msec) 00:29:56.681 slat (usec): min=5, max=4046, avg=32.62, stdev=190.03 00:29:56.681 clat (msec): min=2, max=444, avg=179.00, stdev=118.74 00:29:56.681 lat (msec): min=2, max=444, avg=179.03, stdev=118.73 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 67], 00:29:56.681 | 30.00th=[ 105], 40.00th=[ 126], 50.00th=[ 165], 60.00th=[ 207], 00:29:56.681 | 70.00th=[ 220], 80.00th=[ 300], 90.00th=[ 338], 95.00th=[ 443], 00:29:56.681 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:29:56.681 | 99.99th=[ 443] 00:29:56.681 bw ( KiB/s): min= 128, max= 1405, per=5.33%, avg=357.15, stdev=300.39, samples=20 00:29:56.681 iops : min= 32, max= 351, avg=89.25, stdev=75.06, samples=20 00:29:56.681 lat (msec) : 4=1.76%, 10=4.51%, 20=3.41%, 50=2.64%, 100=16.48% 00:29:56.681 lat (msec) : 250=45.71%, 500=25.49% 00:29:56.681 cpu : usr=40.25%, sys=1.65%, ctx=1176, majf=0, minf=9 00:29:56.681 IO depths : 1=1.9%, 2=4.1%, 4=12.5%, 8=70.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=90.6%, 8=4.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91084: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=74, BW=299KiB/s (307kB/s)(3040KiB/10156msec) 00:29:56.681 slat (usec): min=4, max=8065, avg=28.76, stdev=293.48 00:29:56.681 clat (msec): min=41, max=602, avg=212.72, stdev=108.10 00:29:56.681 lat (msec): min=41, max=602, avg=212.75, stdev=108.10 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 48], 5.00th=[ 77], 10.00th=[ 85], 20.00th=[ 108], 00:29:56.681 | 30.00th=[ 122], 40.00th=[ 176], 50.00th=[ 213], 60.00th=[ 228], 00:29:56.681 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 342], 95.00th=[ 393], 00:29:56.681 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:29:56.681 | 99.99th=[ 600] 00:29:56.681 bw ( KiB/s): min= 96, max= 688, per=4.44%, avg=297.60, stdev=152.00, samples=20 00:29:56.681 iops : min= 24, max= 172, avg=74.40, stdev=38.00, samples=20 00:29:56.681 lat (msec) : 50=2.24%, 100=15.53%, 250=45.13%, 500=35.26%, 750=1.84% 00:29:56.681 cpu : usr=38.15%, sys=1.38%, ctx=1059, majf=0, minf=9 00:29:56.681 IO depths : 1=0.1%, 2=0.3%, 4=4.9%, 8=80.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=89.0%, 8=7.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91085: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10109msec) 00:29:56.681 slat (usec): min=3, max=5080, avg=38.82, stdev=213.92 00:29:56.681 clat (msec): min=60, max=575, avg=288.48, stdev=134.38 00:29:56.681 lat (msec): min=60, max=575, avg=288.52, stdev=134.39 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 61], 5.00th=[ 96], 10.00th=[ 
108], 20.00th=[ 144], 00:29:56.681 | 30.00th=[ 215], 40.00th=[ 253], 50.00th=[ 300], 60.00th=[ 313], 00:29:56.681 | 70.00th=[ 338], 80.00th=[ 422], 90.00th=[ 456], 95.00th=[ 542], 00:29:56.681 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:29:56.681 | 99.99th=[ 575] 00:29:56.681 bw ( KiB/s): min= 128, max= 512, per=3.24%, avg=217.45, stdev=110.54, samples=20 00:29:56.681 iops : min= 32, max= 128, avg=54.35, stdev=27.62, samples=20 00:29:56.681 lat (msec) : 100=8.57%, 250=28.57%, 500=54.29%, 750=8.57% 00:29:56.681 cpu : usr=31.73%, sys=0.97%, ctx=873, majf=0, minf=9 00:29:56.681 IO depths : 1=5.0%, 2=10.7%, 4=23.2%, 8=53.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91086: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=76, BW=306KiB/s (313kB/s)(3104KiB/10151msec) 00:29:56.681 slat (usec): min=4, max=8058, avg=70.04, stdev=642.80 00:29:56.681 clat (msec): min=54, max=488, avg=208.87, stdev=92.75 00:29:56.681 lat (msec): min=54, max=488, avg=208.94, stdev=92.73 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 55], 5.00th=[ 78], 10.00th=[ 96], 20.00th=[ 117], 00:29:56.681 | 30.00th=[ 157], 40.00th=[ 182], 50.00th=[ 205], 60.00th=[ 220], 00:29:56.681 | 70.00th=[ 253], 80.00th=[ 305], 90.00th=[ 338], 95.00th=[ 351], 00:29:56.681 | 99.00th=[ 456], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:29:56.681 | 99.99th=[ 489] 00:29:56.681 bw ( KiB/s): min= 128, max= 769, per=4.53%, avg=303.95, stdev=152.31, samples=20 00:29:56.681 iops : min= 32, max= 192, avg=75.95, stdev=38.06, samples=20 00:29:56.681 lat (msec) : 100=17.53%, 250=47.94%, 500=34.54% 00:29:56.681 cpu : usr=33.66%, sys=1.16%, ctx=971, majf=0, minf=9 00:29:56.681 IO depths : 1=2.3%, 2=4.6%, 4=12.4%, 8=70.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91087: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=70, BW=281KiB/s (287kB/s)(2840KiB/10120msec) 00:29:56.681 slat (usec): min=9, max=4058, avg=42.93, stdev=213.13 00:29:56.681 clat (msec): min=48, max=529, avg=227.73, stdev=135.41 00:29:56.681 lat (msec): min=48, max=529, avg=227.78, stdev=135.41 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 49], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 84], 00:29:56.681 | 30.00th=[ 121], 40.00th=[ 165], 50.00th=[ 213], 60.00th=[ 266], 00:29:56.681 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 451], 95.00th=[ 456], 00:29:56.681 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:29:56.681 | 99.99th=[ 531] 00:29:56.681 bw ( KiB/s): min= 128, max= 816, per=4.14%, avg=277.60, stdev=191.74, samples=20 00:29:56.681 iops : min= 32, max= 204, avg=69.40, stdev=47.93, samples=20 00:29:56.681 lat (msec) : 50=2.82%, 100=21.83%, 250=28.59%, 500=44.51%, 750=2.25% 00:29:56.681 cpu : usr=39.86%, sys=1.73%, ctx=1447, majf=0, minf=9 00:29:56.681 IO depths : 1=3.1%, 2=6.2%, 4=14.6%, 8=65.6%, 
16=10.4%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=91088: Wed May 15 02:30:42 2024 00:29:56.681 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10117msec) 00:29:56.681 slat (nsec): min=7984, max=59652, avg=26043.22, stdev=9965.37 00:29:56.681 clat (msec): min=58, max=539, avg=280.78, stdev=116.98 00:29:56.681 lat (msec): min=58, max=539, avg=280.81, stdev=116.99 00:29:56.681 clat percentiles (msec): 00:29:56.681 | 1.00th=[ 59], 5.00th=[ 95], 10.00th=[ 106], 20.00th=[ 163], 00:29:56.681 | 30.00th=[ 213], 40.00th=[ 259], 50.00th=[ 296], 60.00th=[ 317], 00:29:56.681 | 70.00th=[ 330], 80.00th=[ 384], 90.00th=[ 468], 95.00th=[ 477], 00:29:56.681 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 542], 99.95th=[ 542], 00:29:56.681 | 99.99th=[ 542] 00:29:56.681 bw ( KiB/s): min= 128, max= 512, per=3.35%, avg=224.00, stdev=99.72, samples=20 00:29:56.681 iops : min= 32, max= 128, avg=56.00, stdev=24.93, samples=20 00:29:56.681 lat (msec) : 100=8.33%, 250=29.69%, 500=61.63%, 750=0.35% 00:29:56.681 cpu : usr=36.64%, sys=1.62%, ctx=1155, majf=0, minf=9 00:29:56.681 IO depths : 1=5.0%, 2=10.4%, 4=22.6%, 8=54.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:29:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.681 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91089: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=79, BW=316KiB/s (324kB/s)(3216KiB/10168msec) 00:29:56.682 slat (usec): min=4, max=4041, avg=28.77, stdev=142.28 00:29:56.682 clat (msec): min=46, max=450, avg=201.45, stdev=93.03 00:29:56.682 lat (msec): min=46, max=450, avg=201.48, stdev=93.03 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 47], 5.00th=[ 63], 10.00th=[ 83], 20.00th=[ 114], 00:29:56.682 | 30.00th=[ 132], 40.00th=[ 169], 50.00th=[ 205], 60.00th=[ 226], 00:29:56.682 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 338], 00:29:56.682 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 451], 99.95th=[ 451], 00:29:56.682 | 99.99th=[ 451] 00:29:56.682 bw ( KiB/s): min= 174, max= 816, per=4.71%, avg=315.10, stdev=159.09, samples=20 00:29:56.682 iops : min= 43, max= 204, avg=78.75, stdev=39.80, samples=20 00:29:56.682 lat (msec) : 50=1.99%, 100=15.42%, 250=48.01%, 500=34.58% 00:29:56.682 cpu : usr=38.46%, sys=1.41%, ctx=1150, majf=0, minf=9 00:29:56.682 IO depths : 1=0.4%, 2=0.7%, 4=6.1%, 8=79.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=89.0%, 8=6.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91090: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=79, BW=318KiB/s (326kB/s)(3232KiB/10167msec) 00:29:56.682 slat (usec): min=4, max=8066, avg=29.50, stdev=283.31 00:29:56.682 clat (msec): min=51, max=519, avg=200.96, stdev=109.99 00:29:56.682 lat 
(msec): min=51, max=519, avg=200.99, stdev=109.98 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 101], 00:29:56.682 | 30.00th=[ 132], 40.00th=[ 169], 50.00th=[ 192], 60.00th=[ 215], 00:29:56.682 | 70.00th=[ 224], 80.00th=[ 268], 90.00th=[ 368], 95.00th=[ 447], 00:29:56.682 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:29:56.682 | 99.99th=[ 518] 00:29:56.682 bw ( KiB/s): min= 128, max= 728, per=4.72%, avg=316.80, stdev=165.45, samples=20 00:29:56.682 iops : min= 32, max= 182, avg=79.20, stdev=41.36, samples=20 00:29:56.682 lat (msec) : 100=20.05%, 250=57.67%, 500=20.05%, 750=2.23% 00:29:56.682 cpu : usr=33.83%, sys=1.40%, ctx=993, majf=0, minf=9 00:29:56.682 IO depths : 1=0.6%, 2=1.6%, 4=9.3%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91091: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=64, BW=256KiB/s (263kB/s)(2604KiB/10155msec) 00:29:56.682 slat (usec): min=4, max=8065, avg=69.44, stdev=566.36 00:29:56.682 clat (msec): min=46, max=500, avg=248.93, stdev=124.65 00:29:56.682 lat (msec): min=46, max=500, avg=249.00, stdev=124.71 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 46], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 108], 00:29:56.682 | 30.00th=[ 171], 40.00th=[ 236], 50.00th=[ 251], 60.00th=[ 305], 00:29:56.682 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 443], 95.00th=[ 456], 00:29:56.682 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 502], 99.95th=[ 502], 00:29:56.682 | 99.99th=[ 502] 00:29:56.682 bw ( KiB/s): min= 128, max= 697, per=3.80%, avg=254.05, stdev=165.70, samples=20 00:29:56.682 iops : min= 32, max= 174, avg=63.50, stdev=41.39, samples=20 00:29:56.682 lat (msec) : 50=2.46%, 100=16.28%, 250=30.11%, 500=50.69%, 750=0.46% 00:29:56.682 cpu : usr=31.89%, sys=0.91%, ctx=893, majf=0, minf=9 00:29:56.682 IO depths : 1=3.2%, 2=6.8%, 4=17.7%, 8=63.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=91.7%, 8=2.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91092: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=68, BW=275KiB/s (281kB/s)(2784KiB/10141msec) 00:29:56.682 slat (usec): min=5, max=8049, avg=33.92, stdev=304.50 00:29:56.682 clat (msec): min=46, max=487, avg=232.95, stdev=115.27 00:29:56.682 lat (msec): min=46, max=487, avg=232.99, stdev=115.27 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 47], 5.00th=[ 88], 10.00th=[ 96], 20.00th=[ 121], 00:29:56.682 | 30.00th=[ 144], 40.00th=[ 178], 50.00th=[ 213], 60.00th=[ 275], 00:29:56.682 | 70.00th=[ 313], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 451], 00:29:56.682 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:29:56.682 | 99.99th=[ 489] 00:29:56.682 bw ( KiB/s): min= 128, max= 640, per=4.06%, avg=272.00, stdev=144.98, samples=20 00:29:56.682 iops : min= 32, max= 160, avg=68.00, stdev=36.24, samples=20 00:29:56.682 lat (msec) : 50=2.01%, 100=10.78%, 
250=42.67%, 500=44.54% 00:29:56.682 cpu : usr=40.62%, sys=1.79%, ctx=1237, majf=0, minf=9 00:29:56.682 IO depths : 1=2.0%, 2=4.5%, 4=12.4%, 8=69.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91093: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10147msec) 00:29:56.682 slat (usec): min=4, max=8047, avg=36.25, stdev=325.77 00:29:56.682 clat (msec): min=47, max=532, avg=266.51, stdev=128.55 00:29:56.682 lat (msec): min=47, max=532, avg=266.54, stdev=128.58 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 48], 5.00th=[ 89], 10.00th=[ 104], 20.00th=[ 115], 00:29:56.682 | 30.00th=[ 188], 40.00th=[ 224], 50.00th=[ 279], 60.00th=[ 313], 00:29:56.682 | 70.00th=[ 330], 80.00th=[ 401], 90.00th=[ 447], 95.00th=[ 456], 00:29:56.682 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:29:56.682 | 99.99th=[ 531] 00:29:56.682 bw ( KiB/s): min= 128, max= 640, per=3.53%, avg=236.65, stdev=131.98, samples=20 00:29:56.682 iops : min= 32, max= 160, avg=59.15, stdev=32.99, samples=20 00:29:56.682 lat (msec) : 50=2.63%, 100=3.78%, 250=41.78%, 500=49.18%, 750=2.63% 00:29:56.682 cpu : usr=37.26%, sys=1.41%, ctx=1181, majf=0, minf=9 00:29:56.682 IO depths : 1=2.8%, 2=5.6%, 4=17.1%, 8=64.5%, 16=10.0%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=90.9%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91094: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=79, BW=318KiB/s (326kB/s)(3236KiB/10179msec) 00:29:56.682 slat (usec): min=4, max=8049, avg=48.56, stdev=337.67 00:29:56.682 clat (msec): min=8, max=440, avg=200.94, stdev=120.56 00:29:56.682 lat (msec): min=8, max=441, avg=200.99, stdev=120.57 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 52], 20.00th=[ 79], 00:29:56.682 | 30.00th=[ 114], 40.00th=[ 157], 50.00th=[ 209], 60.00th=[ 222], 00:29:56.682 | 70.00th=[ 255], 80.00th=[ 313], 90.00th=[ 363], 95.00th=[ 439], 00:29:56.682 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:29:56.682 | 99.99th=[ 443] 00:29:56.682 bw ( KiB/s): min= 128, max= 1208, per=4.74%, avg=317.20, stdev=250.54, samples=20 00:29:56.682 iops : min= 32, max= 302, avg=79.30, stdev=62.63, samples=20 00:29:56.682 lat (msec) : 10=1.98%, 20=3.09%, 50=4.57%, 100=16.93%, 250=41.78% 00:29:56.682 lat (msec) : 500=31.64% 00:29:56.682 cpu : usr=40.77%, sys=1.74%, ctx=1513, majf=0, minf=9 00:29:56.682 IO depths : 1=3.2%, 2=6.6%, 4=15.8%, 8=65.0%, 16=9.4%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91095: Wed May 15 02:30:42 2024 00:29:56.682 read: 
IOPS=59, BW=240KiB/s (246kB/s)(2432KiB/10138msec) 00:29:56.682 slat (usec): min=7, max=4047, avg=27.00, stdev=163.62 00:29:56.682 clat (msec): min=65, max=479, avg=266.51, stdev=123.96 00:29:56.682 lat (msec): min=66, max=480, avg=266.53, stdev=123.95 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 67], 5.00th=[ 96], 10.00th=[ 102], 20.00th=[ 131], 00:29:56.682 | 30.00th=[ 169], 40.00th=[ 215], 50.00th=[ 279], 60.00th=[ 313], 00:29:56.682 | 70.00th=[ 330], 80.00th=[ 384], 90.00th=[ 468], 95.00th=[ 472], 00:29:56.682 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:29:56.682 | 99.99th=[ 481] 00:29:56.682 bw ( KiB/s): min= 128, max= 512, per=3.53%, avg=236.70, stdev=126.46, samples=20 00:29:56.682 iops : min= 32, max= 128, avg=59.15, stdev=31.61, samples=20 00:29:56.682 lat (msec) : 100=8.22%, 250=32.89%, 500=58.88% 00:29:56.682 cpu : usr=37.31%, sys=1.33%, ctx=1036, majf=0, minf=9 00:29:56.682 IO depths : 1=4.8%, 2=10.2%, 4=22.5%, 8=54.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:29:56.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.682 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.682 filename1: (groupid=0, jobs=1): err= 0: pid=91096: Wed May 15 02:30:42 2024 00:29:56.682 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10121msec) 00:29:56.682 slat (usec): min=4, max=8057, avg=44.59, stdev=366.81 00:29:56.682 clat (msec): min=71, max=623, avg=280.72, stdev=116.03 00:29:56.682 lat (msec): min=71, max=623, avg=280.77, stdev=116.04 00:29:56.682 clat percentiles (msec): 00:29:56.682 | 1.00th=[ 84], 5.00th=[ 104], 10.00th=[ 115], 20.00th=[ 176], 00:29:56.682 | 30.00th=[ 211], 40.00th=[ 247], 50.00th=[ 284], 60.00th=[ 313], 00:29:56.682 | 70.00th=[ 334], 80.00th=[ 388], 90.00th=[ 464], 95.00th=[ 472], 00:29:56.683 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 625], 99.95th=[ 625], 00:29:56.683 | 99.99th=[ 625] 00:29:56.683 bw ( KiB/s): min= 128, max= 512, per=3.35%, avg=224.00, stdev=108.89, samples=20 00:29:56.683 iops : min= 32, max= 128, avg=56.00, stdev=27.22, samples=20 00:29:56.683 lat (msec) : 100=3.12%, 250=37.50%, 500=59.03%, 750=0.35% 00:29:56.683 cpu : usr=36.75%, sys=1.35%, ctx=1185, majf=0, minf=9 00:29:56.683 IO depths : 1=5.2%, 2=10.6%, 4=22.4%, 8=54.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91097: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=62, BW=250KiB/s (256kB/s)(2540KiB/10157msec) 00:29:56.683 slat (usec): min=4, max=8049, avg=35.54, stdev=318.80 00:29:56.683 clat (msec): min=87, max=671, avg=255.46, stdev=123.03 00:29:56.683 lat (msec): min=87, max=671, avg=255.49, stdev=123.05 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 93], 20.00th=[ 111], 00:29:56.683 | 30.00th=[ 167], 40.00th=[ 205], 50.00th=[ 268], 60.00th=[ 309], 00:29:56.683 | 70.00th=[ 317], 80.00th=[ 355], 90.00th=[ 435], 95.00th=[ 447], 00:29:56.683 | 99.00th=[ 498], 99.50th=[ 676], 99.90th=[ 676], 99.95th=[ 676], 00:29:56.683 | 99.99th=[ 676] 00:29:56.683 bw ( KiB/s): min= 88, 
max= 640, per=3.69%, avg=247.60, stdev=136.59, samples=20 00:29:56.683 iops : min= 22, max= 160, avg=61.90, stdev=34.15, samples=20 00:29:56.683 lat (msec) : 100=12.60%, 250=34.49%, 500=52.13%, 750=0.79% 00:29:56.683 cpu : usr=38.14%, sys=1.73%, ctx=1153, majf=0, minf=9 00:29:56.683 IO depths : 1=5.7%, 2=11.5%, 4=23.9%, 8=52.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91098: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=77, BW=308KiB/s (316kB/s)(3136KiB/10169msec) 00:29:56.683 slat (usec): min=7, max=11055, avg=67.93, stdev=544.22 00:29:56.683 clat (msec): min=39, max=437, avg=206.58, stdev=94.23 00:29:56.683 lat (msec): min=39, max=437, avg=206.64, stdev=94.24 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 41], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 110], 00:29:56.683 | 30.00th=[ 146], 40.00th=[ 180], 50.00th=[ 215], 60.00th=[ 243], 00:29:56.683 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 351], 00:29:56.683 | 99.00th=[ 363], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:29:56.683 | 99.99th=[ 439] 00:29:56.683 bw ( KiB/s): min= 168, max= 880, per=4.59%, avg=307.10, stdev=169.66, samples=20 00:29:56.683 iops : min= 42, max= 220, avg=76.75, stdev=42.43, samples=20 00:29:56.683 lat (msec) : 50=4.08%, 100=13.52%, 250=43.88%, 500=38.52% 00:29:56.683 cpu : usr=38.60%, sys=1.61%, ctx=1393, majf=0, minf=9 00:29:56.683 IO depths : 1=1.4%, 2=3.3%, 4=13.5%, 8=70.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=90.4%, 8=4.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91099: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=73, BW=294KiB/s (301kB/s)(2992KiB/10171msec) 00:29:56.683 slat (nsec): min=4000, max=63339, avg=28325.42, stdev=15930.53 00:29:56.683 clat (msec): min=10, max=526, avg=217.34, stdev=115.94 00:29:56.683 lat (msec): min=10, max=526, avg=217.37, stdev=115.95 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 62], 20.00th=[ 107], 00:29:56.683 | 30.00th=[ 144], 40.00th=[ 192], 50.00th=[ 215], 60.00th=[ 251], 00:29:56.683 | 70.00th=[ 300], 80.00th=[ 326], 90.00th=[ 372], 95.00th=[ 409], 00:29:56.683 | 99.00th=[ 435], 99.50th=[ 527], 99.90th=[ 527], 99.95th=[ 527], 00:29:56.683 | 99.99th=[ 527] 00:29:56.683 bw ( KiB/s): min= 128, max= 944, per=4.36%, avg=292.80, stdev=196.48, samples=20 00:29:56.683 iops : min= 32, max= 236, avg=73.20, stdev=49.12, samples=20 00:29:56.683 lat (msec) : 20=4.28%, 50=4.41%, 100=10.03%, 250=41.18%, 500=39.44% 00:29:56.683 lat (msec) : 750=0.67% 00:29:56.683 cpu : usr=31.87%, sys=0.98%, ctx=895, majf=0, minf=9 00:29:56.683 IO depths : 1=3.6%, 2=7.2%, 4=17.6%, 8=62.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91100: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10135msec) 00:29:56.683 slat (usec): min=4, max=4048, avg=27.62, stdev=155.87 00:29:56.683 clat (msec): min=49, max=497, avg=241.10, stdev=107.72 00:29:56.683 lat (msec): min=49, max=497, avg=241.13, stdev=107.72 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 50], 5.00th=[ 71], 10.00th=[ 96], 20.00th=[ 136], 00:29:56.683 | 30.00th=[ 163], 40.00th=[ 207], 50.00th=[ 255], 60.00th=[ 292], 00:29:56.683 | 70.00th=[ 313], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 435], 00:29:56.683 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:29:56.683 | 99.99th=[ 498] 00:29:56.683 bw ( KiB/s): min= 128, max= 640, per=3.91%, avg=262.40, stdev=132.67, samples=20 00:29:56.683 iops : min= 32, max= 160, avg=65.60, stdev=33.17, samples=20 00:29:56.683 lat (msec) : 50=2.38%, 100=11.90%, 250=34.97%, 500=50.74% 00:29:56.683 cpu : usr=41.16%, sys=1.87%, ctx=1292, majf=0, minf=9 00:29:56.683 IO depths : 1=2.2%, 2=4.8%, 4=17.3%, 8=65.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=90.8%, 8=3.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91101: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=67, BW=269KiB/s (275kB/s)(2728KiB/10159msec) 00:29:56.683 slat (nsec): min=7702, max=69441, avg=24818.20, stdev=11815.90 00:29:56.683 clat (msec): min=51, max=472, avg=237.50, stdev=104.36 00:29:56.683 lat (msec): min=51, max=472, avg=237.52, stdev=104.36 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 59], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 124], 00:29:56.683 | 30.00th=[ 146], 40.00th=[ 209], 50.00th=[ 264], 60.00th=[ 296], 00:29:56.683 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 447], 00:29:56.683 | 99.00th=[ 460], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:29:56.683 | 99.99th=[ 472] 00:29:56.683 bw ( KiB/s): min= 128, max= 640, per=3.97%, avg=266.40, stdev=142.47, samples=20 00:29:56.683 iops : min= 32, max= 160, avg=66.60, stdev=35.62, samples=20 00:29:56.683 lat (msec) : 100=11.73%, 250=38.12%, 500=50.15% 00:29:56.683 cpu : usr=37.15%, sys=1.89%, ctx=1104, majf=0, minf=9 00:29:56.683 IO depths : 1=4.7%, 2=9.5%, 4=21.0%, 8=57.0%, 16=7.8%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=92.8%, 8=1.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91102: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=69, BW=279KiB/s (286kB/s)(2840KiB/10170msec) 00:29:56.683 slat (nsec): min=4850, max=84293, avg=16637.82, stdev=10320.66 00:29:56.683 clat (msec): min=56, max=596, avg=228.76, stdev=136.97 00:29:56.683 lat (msec): min=56, max=596, avg=228.77, stdev=136.97 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 72], 00:29:56.683 | 30.00th=[ 121], 40.00th=[ 167], 50.00th=[ 226], 60.00th=[ 262], 00:29:56.683 | 70.00th=[ 
300], 80.00th=[ 342], 90.00th=[ 443], 95.00th=[ 485], 00:29:56.683 | 99.00th=[ 527], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:29:56.683 | 99.99th=[ 600] 00:29:56.683 bw ( KiB/s): min= 128, max= 816, per=4.14%, avg=277.60, stdev=204.34, samples=20 00:29:56.683 iops : min= 32, max= 204, avg=69.40, stdev=51.09, samples=20 00:29:56.683 lat (msec) : 100=28.17%, 250=28.87%, 500=40.00%, 750=2.96% 00:29:56.683 cpu : usr=35.42%, sys=1.50%, ctx=964, majf=0, minf=9 00:29:56.683 IO depths : 1=3.0%, 2=6.5%, 4=17.0%, 8=63.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91103: Wed May 15 02:30:42 2024 00:29:56.683 read: IOPS=76, BW=305KiB/s (312kB/s)(3096KiB/10159msec) 00:29:56.683 slat (usec): min=4, max=8059, avg=36.19, stdev=288.95 00:29:56.683 clat (msec): min=58, max=359, avg=209.17, stdev=81.64 00:29:56.683 lat (msec): min=58, max=359, avg=209.21, stdev=81.63 00:29:56.683 clat percentiles (msec): 00:29:56.683 | 1.00th=[ 59], 5.00th=[ 82], 10.00th=[ 97], 20.00th=[ 121], 00:29:56.683 | 30.00th=[ 155], 40.00th=[ 190], 50.00th=[ 207], 60.00th=[ 228], 00:29:56.683 | 70.00th=[ 262], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 338], 00:29:56.683 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:29:56.683 | 99.99th=[ 359] 00:29:56.683 bw ( KiB/s): min= 176, max= 640, per=4.53%, avg=303.20, stdev=120.46, samples=20 00:29:56.683 iops : min= 44, max= 160, avg=75.80, stdev=30.11, samples=20 00:29:56.683 lat (msec) : 100=11.11%, 250=54.39%, 500=34.50% 00:29:56.683 cpu : usr=36.52%, sys=1.56%, ctx=1100, majf=0, minf=9 00:29:56.683 IO depths : 1=0.9%, 2=2.2%, 4=9.9%, 8=74.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:29:56.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 complete : 0=0.0%, 4=90.0%, 8=4.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.683 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.683 filename2: (groupid=0, jobs=1): err= 0: pid=91104: Wed May 15 02:30:42 2024 00:29:56.684 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10128msec) 00:29:56.684 slat (usec): min=4, max=4068, avg=30.74, stdev=171.84 00:29:56.684 clat (msec): min=75, max=539, avg=273.45, stdev=121.31 00:29:56.684 lat (msec): min=75, max=539, avg=273.48, stdev=121.31 00:29:56.684 clat percentiles (msec): 00:29:56.684 | 1.00th=[ 77], 5.00th=[ 97], 10.00th=[ 103], 20.00th=[ 142], 00:29:56.684 | 30.00th=[ 182], 40.00th=[ 226], 50.00th=[ 309], 60.00th=[ 313], 00:29:56.684 | 70.00th=[ 330], 80.00th=[ 384], 90.00th=[ 456], 95.00th=[ 477], 00:29:56.684 | 99.00th=[ 485], 99.50th=[ 485], 99.90th=[ 542], 99.95th=[ 542], 00:29:56.684 | 99.99th=[ 542] 00:29:56.684 bw ( KiB/s): min= 128, max= 512, per=3.44%, avg=230.40, stdev=127.94, samples=20 00:29:56.684 iops : min= 32, max= 128, avg=57.60, stdev=31.98, samples=20 00:29:56.684 lat (msec) : 100=5.41%, 250=35.14%, 500=59.12%, 750=0.34% 00:29:56.684 cpu : usr=41.33%, sys=1.52%, ctx=1146, majf=0, minf=9 00:29:56.684 IO depths : 1=5.6%, 2=11.5%, 4=23.8%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:56.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.684 complete : 
0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.684 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.684 00:29:56.684 Run status group 0 (all jobs): 00:29:56.684 READ: bw=6693KiB/s (6853kB/s), 222KiB/s-361KiB/s (227kB/s-369kB/s), io=66.7MiB (69.9MB), run=10109-10204msec 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 
02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 bdev_null0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 [2024-05-15 02:30:42.954941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 bdev_null1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:56.684 { 00:29:56.684 "params": { 00:29:56.684 "name": "Nvme$subsystem", 00:29:56.684 "trtype": "$TEST_TRANSPORT", 00:29:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.684 "adrfam": "ipv4", 00:29:56.684 "trsvcid": "$NVMF_PORT", 00:29:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.684 "hdgst": ${hdgst:-false}, 00:29:56.684 "ddgst": ${ddgst:-false} 
00:29:56.684 }, 00:29:56.684 "method": "bdev_nvme_attach_controller" 00:29:56.684 } 00:29:56.684 EOF 00:29:56.684 )") 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:56.684 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:56.685 { 00:29:56.685 "params": { 00:29:56.685 "name": "Nvme$subsystem", 00:29:56.685 "trtype": "$TEST_TRANSPORT", 00:29:56.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.685 "adrfam": "ipv4", 00:29:56.685 "trsvcid": "$NVMF_PORT", 00:29:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.685 "hdgst": ${hdgst:-false}, 00:29:56.685 "ddgst": ${ddgst:-false} 00:29:56.685 }, 00:29:56.685 "method": "bdev_nvme_attach_controller" 00:29:56.685 } 00:29:56.685 EOF 00:29:56.685 )") 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:56.685 02:30:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
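[editor's note] For anyone replaying this part of the trace by hand: the subsystem setup that dif.sh performs before this fio run is just four RPCs per subsystem. The sketch below is a minimal reconstruction, assuming a running nvmf_tgt and the stock rpc.py client at the repo path visible in the trace; the bdev geometry, DIF type, NQNs, serial numbers, address and port are copied verbatim from the rpc_cmd lines above, and everything else (the client path, the loop) is an assumption, not the literal dif.sh code.

#!/usr/bin/env bash
# Sketch: recreate the two DIF-type-1 null bdevs and NVMe/TCP subsystems
# that the randread jobs above (bs=8k/16k/128k, iodepth=8) read from.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path

for sub in 0 1; do
  "$RPC" bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
         --serial-number "53313233-${sub}" --allow-any-host
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
         -t tcp -a 10.0.0.2 -s 4420
done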
00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:56.685 "params": { 00:29:56.685 "name": "Nvme0", 00:29:56.685 "trtype": "tcp", 00:29:56.685 "traddr": "10.0.0.2", 00:29:56.685 "adrfam": "ipv4", 00:29:56.685 "trsvcid": "4420", 00:29:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:56.685 "hdgst": false, 00:29:56.685 "ddgst": false 00:29:56.685 }, 00:29:56.685 "method": "bdev_nvme_attach_controller" 00:29:56.685 },{ 00:29:56.685 "params": { 00:29:56.685 "name": "Nvme1", 00:29:56.685 "trtype": "tcp", 00:29:56.685 "traddr": "10.0.0.2", 00:29:56.685 "adrfam": "ipv4", 00:29:56.685 "trsvcid": "4420", 00:29:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.685 "hdgst": false, 00:29:56.685 "ddgst": false 00:29:56.685 }, 00:29:56.685 "method": "bdev_nvme_attach_controller" 00:29:56.685 }' 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:56.685 02:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.685 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:56.685 ... 00:29:56.685 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:56.685 ... 
00:29:56.685 fio-3.35 00:29:56.685 Starting 4 threads 00:30:00.915 00:30:00.915 filename0: (groupid=0, jobs=1): err= 0: pid=91165: Wed May 15 02:30:48 2024 00:30:00.915 read: IOPS=1801, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5001msec) 00:30:00.915 slat (usec): min=7, max=411, avg=12.34, stdev= 7.56 00:30:00.915 clat (usec): min=2279, max=10923, avg=4385.24, stdev=622.28 00:30:00.915 lat (usec): min=2287, max=10944, avg=4397.58, stdev=622.65 00:30:00.915 clat percentiles (usec): 00:30:00.915 | 1.00th=[ 3818], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:30:00.915 | 30.00th=[ 4178], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:30:00.915 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5604], 00:30:00.915 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 8356], 99.95th=[10814], 00:30:00.915 | 99.99th=[10945] 00:30:00.915 bw ( KiB/s): min=12800, max=15104, per=24.89%, avg=14353.33, stdev=817.85, samples=9 00:30:00.915 iops : min= 1600, max= 1888, avg=1794.11, stdev=102.22, samples=9 00:30:00.915 lat (msec) : 4=1.70%, 10=98.21%, 20=0.09% 00:30:00.915 cpu : usr=91.38%, sys=6.62%, ctx=58, majf=0, minf=9 00:30:00.915 IO depths : 1=7.4%, 2=16.7%, 4=58.3%, 8=17.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 issued rwts: total=9011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.915 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:00.915 filename0: (groupid=0, jobs=1): err= 0: pid=91166: Wed May 15 02:30:48 2024 00:30:00.915 read: IOPS=1797, BW=14.0MiB/s (14.7MB/s)(70.2MiB/5002msec) 00:30:00.915 slat (nsec): min=5329, max=67244, avg=17391.67, stdev=6146.99 00:30:00.915 clat (usec): min=2056, max=13434, avg=4364.01, stdev=681.31 00:30:00.915 lat (usec): min=2081, max=13450, avg=4381.40, stdev=681.39 00:30:00.915 clat percentiles (usec): 00:30:00.915 | 1.00th=[ 3916], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113], 00:30:00.915 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4228], 00:30:00.915 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5604], 00:30:00.915 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[11076], 99.95th=[11207], 00:30:00.915 | 99.99th=[13435] 00:30:00.915 bw ( KiB/s): min=12800, max=15104, per=24.84%, avg=14321.78, stdev=888.45, samples=9 00:30:00.915 iops : min= 1600, max= 1888, avg=1790.22, stdev=111.06, samples=9 00:30:00.915 lat (msec) : 4=1.42%, 10=98.38%, 20=0.20% 00:30:00.915 cpu : usr=91.06%, sys=6.42%, ctx=20, majf=0, minf=0 00:30:00.915 IO depths : 1=8.1%, 2=24.4%, 4=50.6%, 8=16.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 issued rwts: total=8992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.915 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:00.915 filename1: (groupid=0, jobs=1): err= 0: pid=91167: Wed May 15 02:30:48 2024 00:30:00.915 read: IOPS=1804, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5002msec) 00:30:00.915 slat (nsec): min=4768, max=55819, avg=14286.98, stdev=6648.11 00:30:00.915 clat (usec): min=1342, max=11292, avg=4354.50, stdev=629.87 00:30:00.915 lat (usec): min=1352, max=11302, avg=4368.78, stdev=630.43 00:30:00.915 clat percentiles (usec): 00:30:00.915 | 1.00th=[ 3982], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:30:00.915 | 30.00th=[ 4146], 
40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4228], 00:30:00.915 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5604], 00:30:00.915 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 8455], 99.95th=[10814], 00:30:00.915 | 99.99th=[11338] 00:30:00.915 bw ( KiB/s): min=12800, max=15104, per=24.94%, avg=14378.67, stdev=853.87, samples=9 00:30:00.915 iops : min= 1600, max= 1888, avg=1797.33, stdev=106.73, samples=9 00:30:00.915 lat (msec) : 2=0.18%, 4=0.95%, 10=98.78%, 20=0.09% 00:30:00.915 cpu : usr=90.70%, sys=6.70%, ctx=5, majf=0, minf=0 00:30:00.915 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 issued rwts: total=9024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.915 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:00.915 filename1: (groupid=0, jobs=1): err= 0: pid=91168: Wed May 15 02:30:48 2024 00:30:00.915 read: IOPS=1804, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5002msec) 00:30:00.915 slat (nsec): min=7845, max=58495, avg=17677.04, stdev=5631.39 00:30:00.915 clat (usec): min=2886, max=11117, avg=4341.51, stdev=614.37 00:30:00.915 lat (usec): min=2894, max=11147, avg=4359.19, stdev=614.52 00:30:00.915 clat percentiles (usec): 00:30:00.915 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113], 00:30:00.915 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:30:00.915 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 5604], 00:30:00.915 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 9503], 99.95th=[11076], 00:30:00.915 | 99.99th=[11076] 00:30:00.915 bw ( KiB/s): min=12800, max=15104, per=24.94%, avg=14378.67, stdev=819.60, samples=9 00:30:00.915 iops : min= 1600, max= 1888, avg=1797.33, stdev=102.45, samples=9 00:30:00.915 lat (msec) : 4=1.25%, 10=98.66%, 20=0.09% 00:30:00.915 cpu : usr=91.30%, sys=6.44%, ctx=25, majf=0, minf=0 00:30:00.915 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.915 issued rwts: total=9024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.915 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:00.915 00:30:00.915 Run status group 0 (all jobs): 00:30:00.916 READ: bw=56.3MiB/s (59.0MB/s), 14.0MiB/s-14.1MiB/s (14.7MB/s-14.8MB/s), io=282MiB (295MB), run=5001-5002msec 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.174 02:30:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 ************************************ 00:30:01.174 END TEST fio_dif_rand_params 00:30:01.174 ************************************ 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.174 00:30:01.174 real 0m23.597s 00:30:01.174 user 2m5.177s 00:30:01.174 sys 0m6.590s 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 02:30:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:01.174 02:30:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:01.174 02:30:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 ************************************ 00:30:01.174 START TEST fio_dif_digest 00:30:01.174 ************************************ 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.174 bdev_null0 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.174 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.175 [2024-05-15 02:30:49.139916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:01.175 { 00:30:01.175 "params": { 00:30:01.175 "name": "Nvme$subsystem", 00:30:01.175 "trtype": "$TEST_TRANSPORT", 00:30:01.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.175 "adrfam": "ipv4", 00:30:01.175 "trsvcid": "$NVMF_PORT", 00:30:01.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.175 "hdgst": ${hdgst:-false}, 00:30:01.175 "ddgst": ${ddgst:-false} 00:30:01.175 }, 00:30:01.175 
"method": "bdev_nvme_attach_controller" 00:30:01.175 } 00:30:01.175 EOF 00:30:01.175 )") 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:01.175 "params": { 00:30:01.175 "name": "Nvme0", 00:30:01.175 "trtype": "tcp", 00:30:01.175 "traddr": "10.0.0.2", 00:30:01.175 "adrfam": "ipv4", 00:30:01.175 "trsvcid": "4420", 00:30:01.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.175 "hdgst": true, 00:30:01.175 "ddgst": true 00:30:01.175 }, 00:30:01.175 "method": "bdev_nvme_attach_controller" 00:30:01.175 }' 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:01.175 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:01.433 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:01.433 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:01.433 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:01.433 02:30:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.433 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:01.433 ... 
00:30:01.433 fio-3.35 00:30:01.433 Starting 3 threads 00:30:13.635 00:30:13.635 filename0: (groupid=0, jobs=1): err= 0: pid=91238: Wed May 15 02:30:59 2024 00:30:13.635 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10004msec) 00:30:13.635 slat (nsec): min=4745, max=73942, avg=17501.24, stdev=7482.85 00:30:13.635 clat (usec): min=8074, max=53370, avg=13519.39, stdev=2138.84 00:30:13.635 lat (usec): min=8086, max=53386, avg=13536.89, stdev=2139.51 00:30:13.635 clat percentiles (usec): 00:30:13.635 | 1.00th=[10552], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:30:13.635 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:30:13.635 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15270], 95.00th=[16712], 00:30:13.635 | 99.00th=[18744], 99.50th=[21103], 99.90th=[52691], 99.95th=[52691], 00:30:13.635 | 99.99th=[53216] 00:30:13.635 bw ( KiB/s): min=23552, max=31232, per=37.48%, avg=28362.11, stdev=1722.76, samples=19 00:30:13.635 iops : min= 184, max= 244, avg=221.58, stdev=13.46, samples=19 00:30:13.635 lat (msec) : 10=0.50%, 20=98.69%, 50=0.68%, 100=0.14% 00:30:13.635 cpu : usr=91.09%, sys=6.93%, ctx=15, majf=0, minf=0 00:30:13.635 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.635 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:13.635 filename0: (groupid=0, jobs=1): err= 0: pid=91239: Wed May 15 02:30:59 2024 00:30:13.635 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(207MiB/10003msec) 00:30:13.635 slat (nsec): min=4710, max=58152, avg=16050.37, stdev=5164.15 00:30:13.636 clat (usec): min=10339, max=27631, avg=18110.01, stdev=1903.66 00:30:13.636 lat (usec): min=10355, max=27646, avg=18126.06, stdev=1903.28 00:30:13.636 clat percentiles (usec): 00:30:13.636 | 1.00th=[12649], 5.00th=[15664], 10.00th=[16188], 20.00th=[16909], 00:30:13.636 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:30:13.636 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20317], 95.00th=[21890], 00:30:13.636 | 99.00th=[24511], 99.50th=[24773], 99.90th=[27395], 99.95th=[27657], 00:30:13.636 | 99.99th=[27657] 00:30:13.636 bw ( KiB/s): min=17664, max=23040, per=28.01%, avg=21194.11, stdev=1315.29, samples=19 00:30:13.636 iops : min= 138, max= 180, avg=165.58, stdev=10.28, samples=19 00:30:13.636 lat (msec) : 20=88.34%, 50=11.66% 00:30:13.636 cpu : usr=92.84%, sys=5.77%, ctx=30, majf=0, minf=9 00:30:13.636 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.636 issued rwts: total=1655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.636 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:13.636 filename0: (groupid=0, jobs=1): err= 0: pid=91240: Wed May 15 02:30:59 2024 00:30:13.636 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(255MiB/10003msec) 00:30:13.636 slat (nsec): min=8021, max=52610, avg=15814.33, stdev=5390.86 00:30:13.636 clat (usec): min=7964, max=56616, avg=14666.37, stdev=2315.47 00:30:13.636 lat (usec): min=7989, max=56630, avg=14682.18, stdev=2315.91 00:30:13.636 clat percentiles (usec): 00:30:13.636 | 1.00th=[11469], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:30:13.636 | 
30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:30:13.636 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16712], 95.00th=[17695], 00:30:13.636 | 99.00th=[20579], 99.50th=[22414], 99.90th=[54789], 99.95th=[55837], 00:30:13.636 | 99.99th=[56361] 00:30:13.636 bw ( KiB/s): min=22272, max=28928, per=34.51%, avg=26112.00, stdev=1594.16, samples=19 00:30:13.636 iops : min= 174, max= 226, avg=204.00, stdev=12.45, samples=19 00:30:13.636 lat (msec) : 10=0.34%, 20=98.34%, 50=1.17%, 100=0.15% 00:30:13.636 cpu : usr=91.97%, sys=6.26%, ctx=34, majf=0, minf=0 00:30:13.636 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.636 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.636 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:13.636 00:30:13.636 Run status group 0 (all jobs): 00:30:13.636 READ: bw=73.9MiB/s (77.5MB/s), 20.7MiB/s-27.7MiB/s (21.7MB/s-29.0MB/s), io=739MiB (775MB), run=10003-10004msec 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.636 ************************************ 00:30:13.636 END TEST fio_dif_digest 00:30:13.636 ************************************ 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.636 00:30:13.636 real 0m10.908s 00:30:13.636 user 0m28.179s 00:30:13.636 sys 0m2.119s 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:13.636 02:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.636 02:31:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:13.636 02:31:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:13.636 rmmod nvme_tcp 00:30:13.636 rmmod nvme_fabrics 00:30:13.636 rmmod nvme_keyring 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 90769 ']' 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 90769 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 90769 ']' 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 90769 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90769 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:13.636 killing process with pid 90769 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90769' 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@965 -- # kill 90769 00:30:13.636 [2024-05-15 02:31:00.156512] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@970 -- # wait 90769 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:13.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:13.636 Waiting for block devices as requested 00:30:13.636 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.636 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.636 02:31:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:13.636 00:30:13.636 real 0m59.499s 00:30:13.636 user 3m49.351s 00:30:13.636 sys 0m16.295s 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:13.636 02:31:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.636 ************************************ 00:30:13.636 END TEST nvmf_dif 00:30:13.636 ************************************ 00:30:13.636 02:31:00 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:13.636 02:31:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:13.636 02:31:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.636 02:31:00 -- common/autotest_common.sh@10 -- # set +x 00:30:13.636 ************************************ 00:30:13.636 START TEST nvmf_abort_qd_sizes 00:30:13.636 
************************************ 00:30:13.636 02:31:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:13.636 * Looking for test storage... 00:30:13.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.636 02:31:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:13.637 02:31:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:13.637 Cannot find device "nvmf_tgt_br" 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:13.637 Cannot find device "nvmf_tgt_br2" 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:13.637 Cannot find device "nvmf_tgt_br" 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:13.637 Cannot find device "nvmf_tgt_br2" 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:13.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:13.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:13.637 02:31:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:13.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:30:13.637 00:30:13.637 --- 10.0.0.2 ping statistics --- 00:30:13.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.637 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:13.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:13.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:30:13.637 00:30:13.637 --- 10.0.0.3 ping statistics --- 00:30:13.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.637 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:13.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:30:13.637 00:30:13.637 --- 10.0.0.1 ping statistics --- 00:30:13.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.637 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:13.637 02:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:14.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:14.203 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:14.203 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:14.203 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=91751 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 91751 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 91751 ']' 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:14.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:14.462 02:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:14.462 [2024-05-15 02:31:02.308189] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
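For reference, the nvmf_veth_init trace above can be condensed into the following sketch of the test topology it builds. The namespace, interface, and address names are the ones printed in the log (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here). This is an illustration of the captured commands, not part of the log, and assumes root plus iproute2/iptables:

    # create the target namespace and veth pairs (host side <-> namespaced target side)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator keeps 10.0.0.1, the namespaced target end gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side peers and open TCP/4420 for NVMe/TCP
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity-check both directions, exactly as the log does
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1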
00:30:14.462 [2024-05-15 02:31:02.308296] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.462 [2024-05-15 02:31:02.457544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.720 [2024-05-15 02:31:02.550004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.720 [2024-05-15 02:31:02.550107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.720 [2024-05-15 02:31:02.550132] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.720 [2024-05-15 02:31:02.550148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.720 [2024-05-15 02:31:02.550162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.720 [2024-05-15 02:31:02.550309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.720 [2024-05-15 02:31:02.550428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.720 [2024-05-15 02:31:02.551505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.720 [2024-05-15 02:31:02.551529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.286 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:15.287 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:30:15.287 02:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:15.287 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.287 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:15.545 02:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.545 02:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:15.546 02:31:03 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 ************************************ 00:30:15.546 START TEST spdk_target_abort 00:30:15.546 ************************************ 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 spdk_targetn1 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 [2024-05-15 02:31:03.462216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.546 [2024-05-15 02:31:03.490107] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:15.546 [2024-05-15 02:31:03.490819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:15.546 02:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.829 Initializing NVMe Controllers 00:30:18.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:18.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:18.830 Initialization complete. Launching workers. 
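The rpc_cmd calls traced above configure the SPDK target that was started inside the namespace. A minimal sketch of the same configuration issued directly with scripts/rpc.py is shown below; the RPC names, NQN, serial, bdev name, and listener address are taken from the trace, while treating rpc_cmd as a thin wrapper over this script is an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # claim the PCIe NVMe device; the resulting bdev is spdk_targetn1, as seen in the log
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    # TCP transport with io_unit_size 8192 (-u), matching the traced options
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420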
00:30:18.830 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7651, failed: 0 00:30:18.830 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1172, failed to submit 6479 00:30:18.830 success 774, unsuccess 398, failed 0 00:30:18.830 02:31:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:18.830 02:31:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:22.110 Initializing NVMe Controllers 00:30:22.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:22.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:22.111 Initialization complete. Launching workers. 00:30:22.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5928, failed: 0 00:30:22.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 4703 00:30:22.111 success 255, unsuccess 970, failed 0 00:30:22.111 02:31:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:22.111 02:31:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:25.425 Initializing NVMe Controllers 00:30:25.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:25.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:25.425 Initialization complete. Launching workers. 
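The abort invocations in this section come from a single loop over queue depths 4, 24, and 64: each run drives a mixed read/write workload against the TCP subsystem at the given depth and attempts to abort the in-flight commands, then reports how many aborts succeeded. A condensed form of that loop, with the binary path and transport ID string copied from the trace:

    abort_app=/home/vagrant/spdk_repo/spdk/build/examples/abort
    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

    for qd in 4 24 64; do
        # -w rw -M 50: mixed workload (50% reads), 4 KiB I/O, queue depth $qd
        "$abort_app" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
    done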
00:30:25.425 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28343, failed: 0 00:30:25.425 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2465, failed to submit 25878 00:30:25.425 success 361, unsuccess 2104, failed 0 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.425 02:31:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 91751 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 91751 ']' 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 91751 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91751 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:27.953 killing process with pid 91751 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91751' 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 91751 00:30:27.953 [2024-05-15 02:31:15.623506] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 91751 00:30:27.953 00:30:27.953 real 0m12.466s 00:30:27.953 user 0m50.358s 00:30:27.953 sys 0m1.878s 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.953 ************************************ 00:30:27.953 END TEST spdk_target_abort 00:30:27.953 ************************************ 00:30:27.953 02:31:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:27.953 02:31:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:27.953 02:31:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:30:27.953 02:31:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.953 ************************************ 00:30:27.953 START TEST kernel_target_abort 00:30:27.953 ************************************ 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:27.953 02:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:28.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:28.212 Waiting for block devices as requested 00:30:28.471 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:28.471 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:28.730 No valid GPT data, bailing 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:28.730 No valid GPT data, bailing 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:28.730 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:28.731 No valid GPT data, bailing 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:28.731 No valid GPT data, bailing 00:30:28.731 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d --hostid=b5f40b92-c680-4cc4-b45e-3788e6e7a27d -a 10.0.0.1 -t tcp -s 4420 00:30:28.990 00:30:28.990 Discovery Log Number of Records 2, Generation counter 2 00:30:28.990 =====Discovery Log Entry 0====== 00:30:28.990 trtype: tcp 00:30:28.990 adrfam: ipv4 00:30:28.990 subtype: current discovery subsystem 00:30:28.990 treq: not specified, sq flow control disable supported 00:30:28.990 portid: 1 00:30:28.990 trsvcid: 4420 00:30:28.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:28.990 traddr: 10.0.0.1 00:30:28.990 eflags: none 00:30:28.990 sectype: none 00:30:28.990 =====Discovery Log Entry 1====== 00:30:28.990 trtype: tcp 00:30:28.990 adrfam: ipv4 00:30:28.990 subtype: nvme subsystem 00:30:28.990 treq: not specified, sq flow control disable supported 00:30:28.990 portid: 1 00:30:28.990 trsvcid: 4420 00:30:28.990 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:28.990 traddr: 10.0.0.1 00:30:28.990 eflags: none 00:30:28.990 sectype: none 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:28.990 02:31:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:28.990 02:31:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.272 Initializing NVMe Controllers 00:30:32.272 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:32.272 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:32.272 Initialization complete. Launching workers. 00:30:32.272 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31201, failed: 0 00:30:32.272 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31201, failed to submit 0 00:30:32.272 success 0, unsuccess 31201, failed 0 00:30:32.272 02:31:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:32.272 02:31:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.599 Initializing NVMe Controllers 00:30:35.599 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.599 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.599 Initialization complete. Launching workers. 
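For the kernel-target half of the test, the trace above first scans /sys/block/nvme* for a namespace that is neither zoned nor already carrying a partition table (the spdk-gpt.py and blkid checks), settles on /dev/nvme1n1, and then builds a kernel nvmet target over configfs. xtrace does not print redirection targets, so the attribute files below are the standard nvmet configfs names those echoes most plausibly write to; treat this as an illustrative reconstruction rather than a verbatim copy of the script:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo "SPDK-$nqn"  > "$subsys/attr_model"             # assumed target of the traced 'echo SPDK-nqn...'
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # the discovery output in the log shows both the discovery subsystem and
    # $nqn exposed on 10.0.0.1:4420 afterwards (the run also passes --hostnqn/--hostid, omitted here)
    nvme discover -t tcp -a 10.0.0.1 -s 4420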
00:30:35.599 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55197, failed: 0 00:30:35.599 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22369, failed to submit 32828 00:30:35.599 success 0, unsuccess 22369, failed 0 00:30:35.599 02:31:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:35.599 02:31:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:38.879 Initializing NVMe Controllers 00:30:38.879 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:38.879 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:38.879 Initialization complete. Launching workers. 00:30:38.879 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75735, failed: 0 00:30:38.879 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18932, failed to submit 56803 00:30:38.879 success 0, unsuccess 18932, failed 0 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:38.879 02:31:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:39.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:41.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:41.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:41.037 ************************************ 00:30:41.037 END TEST kernel_target_abort 00:30:41.037 ************************************ 00:30:41.037 00:30:41.037 real 0m12.982s 00:30:41.037 user 0m6.183s 00:30:41.037 sys 0m4.140s 00:30:41.037 02:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:41.037 02:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:41.037 
02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.037 rmmod nvme_tcp 00:30:41.037 rmmod nvme_fabrics 00:30:41.037 rmmod nvme_keyring 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.037 Process with pid 91751 is not found 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 91751 ']' 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 91751 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 91751 ']' 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 91751 00:30:41.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (91751) - No such process 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 91751 is not found' 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:41.037 02:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:41.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:41.554 Waiting for block devices as requested 00:30:41.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:41.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:41.554 02:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.813 02:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:41.813 ************************************ 00:30:41.813 END TEST nvmf_abort_qd_sizes 00:30:41.813 ************************************ 00:30:41.813 00:30:41.813 real 0m28.644s 00:30:41.813 user 0m57.727s 00:30:41.813 sys 0m7.164s 00:30:41.813 02:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:41.813 02:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:41.813 02:31:29 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:41.813 02:31:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:41.813 02:31:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:41.813 02:31:29 -- common/autotest_common.sh@10 -- # set +x 
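Teardown in the trace above mirrors the setup: the namespace is disabled, the configfs tree is unlinked and removed, and the nvmet and initiator modules are unloaded (the rmmod lines are modprobe -v output). A condensed sketch follows, with the one redirection target that xtrace hides marked as an assumption:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the traced 'echo 0'
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

    # host side: drop the initiator modules as well
    modprobe -v -r nvme-tcp nvme-fabrics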
00:30:41.813 ************************************ 00:30:41.813 START TEST keyring_file 00:30:41.813 ************************************ 00:30:41.813 02:31:29 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:41.813 * Looking for test storage... 00:30:41.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b5f40b92-c680-4cc4-b45e-3788e6e7a27d 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:41.813 02:31:29 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.813 02:31:29 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.813 02:31:29 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.813 02:31:29 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.813 02:31:29 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.813 02:31:29 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.813 02:31:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:41.813 02:31:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UBcVynXmCr 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:41.813 02:31:29 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UBcVynXmCr 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UBcVynXmCr 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.UBcVynXmCr 00:30:41.813 02:31:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0vHrJH8Rha 00:30:41.813 02:31:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:41.813 02:31:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:42.071 02:31:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0vHrJH8Rha 00:30:42.071 02:31:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0vHrJH8Rha 00:30:42.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.071 02:31:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0vHrJH8Rha 00:30:42.071 02:31:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=92479 00:30:42.071 02:31:29 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:42.071 02:31:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 92479 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 92479 ']' 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:42.071 02:31:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:42.071 [2024-05-15 02:31:29.905348] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
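Before the target comes up, prep_key has written two TLS PSKs (key0 and key1) to throw-away files and restricted them to mode 0600. A minimal sketch of that step with a placeholder for the key contents; in the trace the real payload is produced by the inline python call, which wraps the hex key (digest 0) in the NVMeTLSkey-1 interchange format:

key_path=$(mktemp)                                     # e.g. /tmp/tmp.UBcVynXmCr in this run
# Placeholder contents -- the actual base64 section is computed from the raw key
# bytes plus a CRC32 by the format_interchange_psk helper, not hand-written.
echo "NVMeTLSkey-1:<digest>:<base64(key bytes + CRC32)>:" > "$key_path"
chmod 0600 "$key_path"                                 # anything looser is rejected later in the test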
00:30:42.071 [2024-05-15 02:31:29.905707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92479 ] 00:30:42.071 [2024-05-15 02:31:30.040314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.329 [2024-05-15 02:31:30.104746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:42.895 02:31:30 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:42.895 [2024-05-15 02:31:30.858770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.895 null0 00:30:42.895 [2024-05-15 02:31:30.890691] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:42.895 [2024-05-15 02:31:30.890956] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:42.895 [2024-05-15 02:31:30.891310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:42.895 [2024-05-15 02:31:30.898719] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.895 02:31:30 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.895 02:31:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:43.153 [2024-05-15 02:31:30.914718] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:43.153 2024/05/15 02:31:30 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:30:43.153 request: 00:30:43.153 { 00:30:43.153 "method": "nvmf_subsystem_add_listener", 00:30:43.153 "params": { 00:30:43.153 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.153 "secure_channel": false, 
00:30:43.153 "listen_address": { 00:30:43.153 "trtype": "tcp", 00:30:43.153 "traddr": "127.0.0.1", 00:30:43.153 "trsvcid": "4420" 00:30:43.153 } 00:30:43.153 } 00:30:43.153 } 00:30:43.153 Got JSON-RPC error response 00:30:43.153 GoRPCClient: error on JSON-RPC call 00:30:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:43.153 02:31:30 keyring_file -- keyring/file.sh@46 -- # bperfpid=92508 00:30:43.153 02:31:30 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:43.153 02:31:30 keyring_file -- keyring/file.sh@48 -- # waitforlisten 92508 /var/tmp/bperf.sock 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 92508 ']' 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:43.153 02:31:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:43.153 [2024-05-15 02:31:30.979356] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
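With the target listening on 127.0.0.1:4420 (and the duplicate-listener RPC correctly rejected), the test starts bdevperf idle and drives it over its own RPC socket. A condensed sketch of that flow, using the binary, flags, and socket path shown in the trace; the waitforlisten helper is elided here in favor of a comment:

SPDK=/home/vagrant/spdk_repo/spdk
# -z keeps bdevperf idle until tests are kicked off over /var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z &
# (the test waits for the socket to appear before issuing any RPCs)
# Register the key files and check their reference counts, as the trace does:
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'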
00:30:43.153 [2024-05-15 02:31:30.979492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92508 ] 00:30:43.153 [2024-05-15 02:31:31.117103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.411 [2024-05-15 02:31:31.176545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.981 02:31:31 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:43.981 02:31:31 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:43.981 02:31:31 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:43.981 02:31:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:44.240 02:31:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0vHrJH8Rha 00:30:44.240 02:31:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0vHrJH8Rha 00:30:44.499 02:31:32 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:44.499 02:31:32 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:44.499 02:31:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:44.499 02:31:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:44.499 02:31:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:44.757 02:31:32 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.UBcVynXmCr == \/\t\m\p\/\t\m\p\.\U\B\c\V\y\n\X\m\C\r ]] 00:30:44.757 02:31:32 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:44.757 02:31:32 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:44.757 02:31:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:44.757 02:31:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:44.757 02:31:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.015 02:31:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0vHrJH8Rha == \/\t\m\p\/\t\m\p\.\0\v\H\r\J\H\8\R\h\a ]] 00:30:45.015 02:31:32 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:45.015 02:31:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.015 02:31:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:45.015 02:31:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.015 02:31:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.015 02:31:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.273 02:31:33 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:45.273 02:31:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:45.273 02:31:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:45.273 02:31:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.273 02:31:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.273 02:31:33 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.273 02:31:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.837 02:31:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:45.837 02:31:33 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:45.837 02:31:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:45.837 [2024-05-15 02:31:33.820883] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:46.095 nvme0n1 00:30:46.095 02:31:33 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:46.095 02:31:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:46.095 02:31:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.095 02:31:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.095 02:31:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:46.095 02:31:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.353 02:31:34 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:46.353 02:31:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:46.353 02:31:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:46.353 02:31:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.353 02:31:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.353 02:31:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:46.353 02:31:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.610 02:31:34 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:46.610 02:31:34 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.867 Running I/O for 1 seconds... 
00:30:47.799 00:30:47.799 Latency(us) 00:30:47.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.799 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:47.799 nvme0n1 : 1.01 11158.45 43.59 0.00 0.00 11425.63 6464.23 22997.18 00:30:47.799 =================================================================================================================== 00:30:47.799 Total : 11158.45 43.59 0.00 0.00 11425.63 6464.23 22997.18 00:30:47.799 0 00:30:47.799 02:31:35 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:47.799 02:31:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:48.056 02:31:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:48.056 02:31:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.056 02:31:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.056 02:31:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.056 02:31:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.056 02:31:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.315 02:31:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:48.315 02:31:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:48.315 02:31:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.315 02:31:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.315 02:31:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.315 02:31:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.315 02:31:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.881 02:31:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:48.881 02:31:36 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.881 02:31:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.881 02:31:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.881 [2024-05-15 02:31:36.886043] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:48.881 [2024-05-15 02:31:36.886112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b898e0 (107): Transport endpoint is not connected 00:30:48.881 [2024-05-15 02:31:36.887101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b898e0 (9): Bad file descriptor 00:30:48.881 [2024-05-15 02:31:36.888098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:48.881 [2024-05-15 02:31:36.888123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:48.881 [2024-05-15 02:31:36.888134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:48.881 2024/05/15 02:31:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:30:48.881 request: 00:30:48.881 { 00:30:48.881 "method": "bdev_nvme_attach_controller", 00:30:48.881 "params": { 00:30:48.881 "name": "nvme0", 00:30:48.881 "trtype": "tcp", 00:30:48.881 "traddr": "127.0.0.1", 00:30:48.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.881 "adrfam": "ipv4", 00:30:48.881 "trsvcid": "4420", 00:30:48.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.881 "psk": "key1" 00:30:48.881 } 00:30:48.881 } 00:30:48.881 Got JSON-RPC error response 00:30:48.881 GoRPCClient: error on JSON-RPC call 00:30:49.140 02:31:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:49.140 02:31:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:49.140 02:31:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:49.140 02:31:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:49.140 02:31:36 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:49.140 02:31:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:49.140 02:31:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.140 02:31:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.140 02:31:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.140 02:31:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.398 02:31:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:49.398 02:31:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:49.398 02:31:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:49.398 02:31:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.398 02:31:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.398 02:31:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:49.398 02:31:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.708 02:31:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:49.708 02:31:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:49.708 02:31:37 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:49.967 02:31:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:49.967 02:31:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:50.225 02:31:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:50.225 02:31:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.225 02:31:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:50.483 02:31:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:50.483 02:31:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.UBcVynXmCr 00:30:50.483 02:31:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.483 02:31:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:50.483 02:31:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:50.743 [2024-05-15 02:31:38.599547] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UBcVynXmCr': 0100660 00:30:50.743 [2024-05-15 02:31:38.599601] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:50.743 2024/05/15 02:31:38 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.UBcVynXmCr], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:30:50.743 request: 00:30:50.743 { 00:30:50.743 "method": "keyring_file_add_key", 00:30:50.743 "params": { 00:30:50.743 "name": "key0", 00:30:50.743 "path": "/tmp/tmp.UBcVynXmCr" 00:30:50.743 } 00:30:50.743 } 00:30:50.743 Got JSON-RPC error response 00:30:50.743 GoRPCClient: error on JSON-RPC call 00:30:50.743 02:31:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:50.743 02:31:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:50.743 02:31:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:50.743 02:31:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:50.743 02:31:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.UBcVynXmCr 00:30:50.743 02:31:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:50.743 02:31:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UBcVynXmCr 00:30:51.002 02:31:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.UBcVynXmCr 00:30:51.002 02:31:38 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:51.002 02:31:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:51.002 02:31:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:51.002 02:31:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.002 02:31:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.002 02:31:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.261 02:31:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:51.261 02:31:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.261 02:31:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:51.261 02:31:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:51.520 [2024-05-15 02:31:39.523729] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.UBcVynXmCr': No such file or directory 00:30:51.520 [2024-05-15 02:31:39.523774] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:51.520 [2024-05-15 02:31:39.523800] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:51.520 [2024-05-15 02:31:39.523809] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:51.520 [2024-05-15 02:31:39.523818] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:51.520 2024/05/15 02:31:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:30:51.520 request: 00:30:51.520 { 00:30:51.520 "method": "bdev_nvme_attach_controller", 00:30:51.520 "params": { 00:30:51.520 "name": "nvme0", 00:30:51.520 "trtype": "tcp", 00:30:51.520 "traddr": "127.0.0.1", 00:30:51.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.520 "adrfam": "ipv4", 00:30:51.520 "trsvcid": "4420", 00:30:51.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.520 "psk": "key0" 00:30:51.520 } 00:30:51.520 } 
00:30:51.520 Got JSON-RPC error response 00:30:51.520 GoRPCClient: error on JSON-RPC call 00:30:51.779 02:31:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:51.779 02:31:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.779 02:31:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.779 02:31:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.779 02:31:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:51.779 02:31:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:52.037 02:31:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1jDqHzI4Up 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.037 02:31:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1jDqHzI4Up 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1jDqHzI4Up 00:30:52.037 02:31:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.1jDqHzI4Up 00:30:52.037 02:31:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1jDqHzI4Up 00:30:52.037 02:31:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1jDqHzI4Up 00:30:52.295 02:31:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.295 02:31:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.552 nvme0n1 00:30:52.552 02:31:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:52.552 02:31:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:52.552 02:31:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.552 02:31:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.552 02:31:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:52.552 02:31:40 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.810 02:31:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:52.810 02:31:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:52.810 02:31:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:53.068 02:31:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:53.068 02:31:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:53.068 02:31:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.068 02:31:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.068 02:31:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.326 02:31:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:53.326 02:31:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:53.326 02:31:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.326 02:31:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.326 02:31:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.326 02:31:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.326 02:31:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.585 02:31:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:53.585 02:31:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:53.585 02:31:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:53.843 02:31:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:53.843 02:31:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:53.843 02:31:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.102 02:31:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:54.102 02:31:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1jDqHzI4Up 00:30:54.102 02:31:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1jDqHzI4Up 00:30:54.359 02:31:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0vHrJH8Rha 00:30:54.359 02:31:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0vHrJH8Rha 00:30:54.676 02:31:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.676 02:31:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.934 nvme0n1 00:30:54.934 02:31:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:54.934 02:31:42 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:55.196 02:31:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:55.196 "subsystems": [ 00:30:55.196 { 00:30:55.196 "subsystem": "keyring", 00:30:55.196 "config": [ 00:30:55.196 { 00:30:55.196 "method": "keyring_file_add_key", 00:30:55.196 "params": { 00:30:55.196 "name": "key0", 00:30:55.196 "path": "/tmp/tmp.1jDqHzI4Up" 00:30:55.196 } 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "method": "keyring_file_add_key", 00:30:55.196 "params": { 00:30:55.196 "name": "key1", 00:30:55.196 "path": "/tmp/tmp.0vHrJH8Rha" 00:30:55.196 } 00:30:55.196 } 00:30:55.196 ] 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "subsystem": "iobuf", 00:30:55.196 "config": [ 00:30:55.196 { 00:30:55.196 "method": "iobuf_set_options", 00:30:55.196 "params": { 00:30:55.196 "large_bufsize": 135168, 00:30:55.196 "large_pool_count": 1024, 00:30:55.196 "small_bufsize": 8192, 00:30:55.196 "small_pool_count": 8192 00:30:55.196 } 00:30:55.196 } 00:30:55.196 ] 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "subsystem": "sock", 00:30:55.196 "config": [ 00:30:55.196 { 00:30:55.196 "method": "sock_impl_set_options", 00:30:55.196 "params": { 00:30:55.196 "enable_ktls": false, 00:30:55.196 "enable_placement_id": 0, 00:30:55.196 "enable_quickack": false, 00:30:55.196 "enable_recv_pipe": true, 00:30:55.196 "enable_zerocopy_send_client": false, 00:30:55.196 "enable_zerocopy_send_server": true, 00:30:55.196 "impl_name": "posix", 00:30:55.196 "recv_buf_size": 2097152, 00:30:55.196 "send_buf_size": 2097152, 00:30:55.196 "tls_version": 0, 00:30:55.196 "zerocopy_threshold": 0 00:30:55.196 } 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "method": "sock_impl_set_options", 00:30:55.196 "params": { 00:30:55.196 "enable_ktls": false, 00:30:55.196 "enable_placement_id": 0, 00:30:55.196 "enable_quickack": false, 00:30:55.196 "enable_recv_pipe": true, 00:30:55.196 "enable_zerocopy_send_client": false, 00:30:55.196 "enable_zerocopy_send_server": true, 00:30:55.196 "impl_name": "ssl", 00:30:55.196 "recv_buf_size": 4096, 00:30:55.196 "send_buf_size": 4096, 00:30:55.196 "tls_version": 0, 00:30:55.196 "zerocopy_threshold": 0 00:30:55.196 } 00:30:55.196 } 00:30:55.196 ] 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "subsystem": "vmd", 00:30:55.196 "config": [] 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "subsystem": "accel", 00:30:55.196 "config": [ 00:30:55.196 { 00:30:55.196 "method": "accel_set_options", 00:30:55.196 "params": { 00:30:55.196 "buf_count": 2048, 00:30:55.196 "large_cache_size": 16, 00:30:55.196 "sequence_count": 2048, 00:30:55.196 "small_cache_size": 128, 00:30:55.196 "task_count": 2048 00:30:55.196 } 00:30:55.196 } 00:30:55.196 ] 00:30:55.196 }, 00:30:55.196 { 00:30:55.196 "subsystem": "bdev", 00:30:55.196 "config": [ 00:30:55.196 { 00:30:55.196 "method": "bdev_set_options", 00:30:55.196 "params": { 00:30:55.196 "bdev_auto_examine": true, 00:30:55.196 "bdev_io_cache_size": 256, 00:30:55.196 "bdev_io_pool_size": 65535, 00:30:55.196 "iobuf_large_cache_size": 16, 00:30:55.196 "iobuf_small_cache_size": 128 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_raid_set_options", 00:30:55.197 "params": { 00:30:55.197 "process_window_size_kb": 1024 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_iscsi_set_options", 00:30:55.197 "params": { 00:30:55.197 "timeout_sec": 30 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_nvme_set_options", 00:30:55.197 "params": { 00:30:55.197 
"action_on_timeout": "none", 00:30:55.197 "allow_accel_sequence": false, 00:30:55.197 "arbitration_burst": 0, 00:30:55.197 "bdev_retry_count": 3, 00:30:55.197 "ctrlr_loss_timeout_sec": 0, 00:30:55.197 "delay_cmd_submit": true, 00:30:55.197 "dhchap_dhgroups": [ 00:30:55.197 "null", 00:30:55.197 "ffdhe2048", 00:30:55.197 "ffdhe3072", 00:30:55.197 "ffdhe4096", 00:30:55.197 "ffdhe6144", 00:30:55.197 "ffdhe8192" 00:30:55.197 ], 00:30:55.197 "dhchap_digests": [ 00:30:55.197 "sha256", 00:30:55.197 "sha384", 00:30:55.197 "sha512" 00:30:55.197 ], 00:30:55.197 "disable_auto_failback": false, 00:30:55.197 "fast_io_fail_timeout_sec": 0, 00:30:55.197 "generate_uuids": false, 00:30:55.197 "high_priority_weight": 0, 00:30:55.197 "io_path_stat": false, 00:30:55.197 "io_queue_requests": 512, 00:30:55.197 "keep_alive_timeout_ms": 10000, 00:30:55.197 "low_priority_weight": 0, 00:30:55.197 "medium_priority_weight": 0, 00:30:55.197 "nvme_adminq_poll_period_us": 10000, 00:30:55.197 "nvme_error_stat": false, 00:30:55.197 "nvme_ioq_poll_period_us": 0, 00:30:55.197 "rdma_cm_event_timeout_ms": 0, 00:30:55.197 "rdma_max_cq_size": 0, 00:30:55.197 "rdma_srq_size": 0, 00:30:55.197 "reconnect_delay_sec": 0, 00:30:55.197 "timeout_admin_us": 0, 00:30:55.197 "timeout_us": 0, 00:30:55.197 "transport_ack_timeout": 0, 00:30:55.197 "transport_retry_count": 4, 00:30:55.197 "transport_tos": 0 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_nvme_attach_controller", 00:30:55.197 "params": { 00:30:55.197 "adrfam": "IPv4", 00:30:55.197 "ctrlr_loss_timeout_sec": 0, 00:30:55.197 "ddgst": false, 00:30:55.197 "fast_io_fail_timeout_sec": 0, 00:30:55.197 "hdgst": false, 00:30:55.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.197 "name": "nvme0", 00:30:55.197 "prchk_guard": false, 00:30:55.197 "prchk_reftag": false, 00:30:55.197 "psk": "key0", 00:30:55.197 "reconnect_delay_sec": 0, 00:30:55.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.197 "traddr": "127.0.0.1", 00:30:55.197 "trsvcid": "4420", 00:30:55.197 "trtype": "TCP" 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_nvme_set_hotplug", 00:30:55.197 "params": { 00:30:55.197 "enable": false, 00:30:55.197 "period_us": 100000 00:30:55.197 } 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "method": "bdev_wait_for_examine" 00:30:55.197 } 00:30:55.197 ] 00:30:55.197 }, 00:30:55.197 { 00:30:55.197 "subsystem": "nbd", 00:30:55.197 "config": [] 00:30:55.197 } 00:30:55.197 ] 00:30:55.197 }' 00:30:55.197 02:31:43 keyring_file -- keyring/file.sh@114 -- # killprocess 92508 00:30:55.197 02:31:43 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 92508 ']' 00:30:55.197 02:31:43 keyring_file -- common/autotest_common.sh@950 -- # kill -0 92508 00:30:55.197 02:31:43 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:55.197 02:31:43 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:55.197 02:31:43 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92508 00:30:55.456 killing process with pid 92508 00:30:55.456 Received shutdown signal, test time was about 1.000000 seconds 00:30:55.456 00:30:55.456 Latency(us) 00:30:55.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.456 =================================================================================================================== 00:30:55.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92508' 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@965 -- # kill 92508 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@970 -- # wait 92508 00:30:55.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:55.456 02:31:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=92913 00:30:55.456 02:31:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 92913 /var/tmp/bperf.sock 00:30:55.456 02:31:43 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 92913 ']' 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:55.456 02:31:43 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:55.456 02:31:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:55.456 "subsystems": [ 00:30:55.456 { 00:30:55.456 "subsystem": "keyring", 00:30:55.456 "config": [ 00:30:55.456 { 00:30:55.456 "method": "keyring_file_add_key", 00:30:55.456 "params": { 00:30:55.456 "name": "key0", 00:30:55.456 "path": "/tmp/tmp.1jDqHzI4Up" 00:30:55.456 } 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "method": "keyring_file_add_key", 00:30:55.456 "params": { 00:30:55.456 "name": "key1", 00:30:55.456 "path": "/tmp/tmp.0vHrJH8Rha" 00:30:55.456 } 00:30:55.456 } 00:30:55.456 ] 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "subsystem": "iobuf", 00:30:55.456 "config": [ 00:30:55.456 { 00:30:55.456 "method": "iobuf_set_options", 00:30:55.456 "params": { 00:30:55.456 "large_bufsize": 135168, 00:30:55.456 "large_pool_count": 1024, 00:30:55.456 "small_bufsize": 8192, 00:30:55.456 "small_pool_count": 8192 00:30:55.456 } 00:30:55.456 } 00:30:55.456 ] 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "subsystem": "sock", 00:30:55.456 "config": [ 00:30:55.456 { 00:30:55.456 "method": "sock_impl_set_options", 00:30:55.456 "params": { 00:30:55.456 "enable_ktls": false, 00:30:55.456 "enable_placement_id": 0, 00:30:55.456 "enable_quickack": false, 00:30:55.456 "enable_recv_pipe": true, 00:30:55.456 "enable_zerocopy_send_client": false, 00:30:55.456 "enable_zerocopy_send_server": true, 00:30:55.456 "impl_name": "posix", 00:30:55.456 "recv_buf_size": 2097152, 00:30:55.456 "send_buf_size": 2097152, 00:30:55.456 "tls_version": 0, 00:30:55.456 "zerocopy_threshold": 0 00:30:55.456 } 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "method": "sock_impl_set_options", 00:30:55.456 "params": { 00:30:55.456 "enable_ktls": false, 00:30:55.456 "enable_placement_id": 0, 00:30:55.456 "enable_quickack": false, 00:30:55.456 "enable_recv_pipe": true, 00:30:55.456 "enable_zerocopy_send_client": false, 00:30:55.456 "enable_zerocopy_send_server": true, 00:30:55.456 "impl_name": "ssl", 00:30:55.456 "recv_buf_size": 4096, 00:30:55.456 "send_buf_size": 4096, 00:30:55.456 "tls_version": 0, 00:30:55.456 "zerocopy_threshold": 0 00:30:55.456 } 00:30:55.456 } 00:30:55.456 ] 
00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "subsystem": "vmd", 00:30:55.456 "config": [] 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "subsystem": "accel", 00:30:55.456 "config": [ 00:30:55.456 { 00:30:55.456 "method": "accel_set_options", 00:30:55.456 "params": { 00:30:55.456 "buf_count": 2048, 00:30:55.456 "large_cache_size": 16, 00:30:55.456 "sequence_count": 2048, 00:30:55.456 "small_cache_size": 128, 00:30:55.456 "task_count": 2048 00:30:55.456 } 00:30:55.456 } 00:30:55.456 ] 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "subsystem": "bdev", 00:30:55.456 "config": [ 00:30:55.456 { 00:30:55.456 "method": "bdev_set_options", 00:30:55.456 "params": { 00:30:55.456 "bdev_auto_examine": true, 00:30:55.456 "bdev_io_cache_size": 256, 00:30:55.456 "bdev_io_pool_size": 65535, 00:30:55.456 "iobuf_large_cache_size": 16, 00:30:55.456 "iobuf_small_cache_size": 128 00:30:55.456 } 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "method": "bdev_raid_set_options", 00:30:55.456 "params": { 00:30:55.456 "process_window_size_kb": 1024 00:30:55.456 } 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "method": "bdev_iscsi_set_options", 00:30:55.456 "params": { 00:30:55.456 "timeout_sec": 30 00:30:55.456 } 00:30:55.456 }, 00:30:55.456 { 00:30:55.456 "method": "bdev_nvme_set_options", 00:30:55.456 "params": { 00:30:55.456 "action_on_timeout": "none", 00:30:55.456 "allow_accel_sequence": false, 00:30:55.456 "arbitration_burst": 0, 00:30:55.456 "bdev_retry_count": 3, 00:30:55.456 "ctrlr_loss_timeout_sec": 0, 00:30:55.456 "delay_cmd_submit": true, 00:30:55.456 "dhchap_dhgroups": [ 00:30:55.456 "null", 00:30:55.456 "ffdhe2048", 00:30:55.456 "ffdhe3072", 00:30:55.456 "ffdhe4096", 00:30:55.456 "ffdhe6144", 00:30:55.456 "ffdhe8192" 00:30:55.456 ], 00:30:55.456 "dhchap_digests": [ 00:30:55.456 "sha256", 00:30:55.456 "sha384", 00:30:55.456 "sha512" 00:30:55.456 ], 00:30:55.456 "disable_auto_failback": false, 00:30:55.456 "fast_io_fail_timeout_sec": 0, 00:30:55.457 "generate_uuids": false, 00:30:55.457 "high_priority_weight": 0, 00:30:55.457 "io_path_stat": false, 00:30:55.457 "io_queue_requests": 512, 00:30:55.457 "keep_alive_timeout_ms": 10000, 00:30:55.457 "low_priority_weight": 0, 00:30:55.457 "medium_priority_weight": 0, 00:30:55.457 "nvme_adminq_poll_period_us": 10000, 00:30:55.457 "nvme_error_stat": false, 00:30:55.457 "nvme_ioq_poll_period_us": 0, 00:30:55.457 "rdma_cm_event_timeout_ms": 0, 00:30:55.457 "rdma_max_cq_size": 0, 00:30:55.457 "rdma_srq_size": 0, 00:30:55.457 "reconnect_delay_sec": 0, 00:30:55.457 "timeout_admin_us": 0, 00:30:55.457 "timeout_us": 0, 00:30:55.457 "transport_ack_timeout": 0, 00:30:55.457 "transport_retry_count": 4, 00:30:55.457 "transport_tos": 0 00:30:55.457 } 00:30:55.457 }, 00:30:55.457 { 00:30:55.457 "method": "bdev_nvme_attach_controller", 00:30:55.457 "params": { 00:30:55.457 "adrfam": "IPv4", 00:30:55.457 "ctrlr_loss_timeout_sec": 0, 00:30:55.457 "ddgst": false, 00:30:55.457 "fast_io_fail_timeout_sec": 0, 00:30:55.457 "hdgst": false, 00:30:55.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.457 "name": "nvme0", 00:30:55.457 "prchk_guard": false, 00:30:55.457 "prchk_reftag": false, 00:30:55.457 "psk": "key0", 00:30:55.457 "reconnect_delay_sec": 0, 00:30:55.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.457 "traddr": "127.0.0.1", 00:30:55.457 "trsvcid": "4420", 00:30:55.457 "trtype": "TCP" 00:30:55.457 } 00:30:55.457 }, 00:30:55.457 { 00:30:55.457 "method": "bdev_nvme_set_hotplug", 00:30:55.457 "params": { 00:30:55.457 "enable": false, 00:30:55.457 "period_us": 100000 00:30:55.457 
} 00:30:55.457 }, 00:30:55.457 { 00:30:55.457 "method": "bdev_wait_for_examine" 00:30:55.457 } 00:30:55.457 ] 00:30:55.457 }, 00:30:55.457 { 00:30:55.457 "subsystem": "nbd", 00:30:55.457 "config": [] 00:30:55.457 } 00:30:55.457 ] 00:30:55.457 }' 00:30:55.457 02:31:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.457 [2024-05-15 02:31:43.457914] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:30:55.457 [2024-05-15 02:31:43.458826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92913 ] 00:30:55.715 [2024-05-15 02:31:43.594616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.715 [2024-05-15 02:31:43.653960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.972 [2024-05-15 02:31:43.788254] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:56.538 02:31:44 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:56.538 02:31:44 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:56.538 02:31:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:56.538 02:31:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:56.538 02:31:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.795 02:31:44 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:56.795 02:31:44 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:56.795 02:31:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:56.795 02:31:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.795 02:31:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.795 02:31:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.795 02:31:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.053 02:31:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:57.053 02:31:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:57.053 02:31:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.053 02:31:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.053 02:31:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.053 02:31:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.053 02:31:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.310 02:31:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:57.310 02:31:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:57.310 02:31:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:57.310 02:31:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:57.875 02:31:45 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:57.875 02:31:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:57.875 02:31:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1jDqHzI4Up /tmp/tmp.0vHrJH8Rha 00:30:57.875 02:31:45 
keyring_file -- keyring/file.sh@20 -- # killprocess 92913 00:30:57.875 02:31:45 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 92913 ']' 00:30:57.875 02:31:45 keyring_file -- common/autotest_common.sh@950 -- # kill -0 92913 00:30:57.875 02:31:45 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:57.875 02:31:45 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:57.875 02:31:45 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92913 00:30:57.875 killing process with pid 92913 00:30:57.875 Received shutdown signal, test time was about 1.000000 seconds 00:30:57.875 00:30:57.875 Latency(us) 00:30:57.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.875 =================================================================================================================== 00:30:57.876 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92913' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@965 -- # kill 92913 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@970 -- # wait 92913 00:30:57.876 02:31:45 keyring_file -- keyring/file.sh@21 -- # killprocess 92479 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 92479 ']' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@950 -- # kill -0 92479 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92479 00:30:57.876 killing process with pid 92479 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92479' 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@965 -- # kill 92479 00:30:57.876 [2024-05-15 02:31:45.822405] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:57.876 [2024-05-15 02:31:45.822446] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:57.876 02:31:45 keyring_file -- common/autotest_common.sh@970 -- # wait 92479 00:30:58.133 00:30:58.133 real 0m16.480s 00:30:58.133 user 0m41.699s 00:30:58.133 sys 0m3.015s 00:30:58.133 02:31:46 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:58.133 02:31:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:58.133 ************************************ 00:30:58.133 END TEST keyring_file 00:30:58.133 ************************************ 00:30:58.391 02:31:46 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:30:58.391 02:31:46 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 
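The last phase above exercises configuration persistence: save_config captures the running bdevperf setup (including both keyring_file keys and the TLS-backed bdev_nvme_attach_controller), the process is killed, and a fresh bdevperf is started from that JSON alone before the keys and controller are re-checked. A condensed sketch of that sequence under the same paths as the trace; the config is streamed through process substitution, which is where the /dev/fd/63 argument seen above comes from:

SPDK=/home/vagrant/spdk_repo/spdk
config=$($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# ...old bdevperf killed here, then a new one restarted purely from the captured JSON:
$SPDK/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
# (again, the test waits for the RPC socket before querying)
# Verify the replayed config brought everything back:
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length                  # expect 2
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0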
00:30:58.391 02:31:46 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:58.391 02:31:46 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:30:58.391 02:31:46 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:58.391 02:31:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:58.391 02:31:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:58.391 02:31:46 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:30:58.391 02:31:46 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:30:58.391 02:31:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:58.391 02:31:46 -- common/autotest_common.sh@10 -- # set +x 00:30:58.391 02:31:46 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:30:58.391 02:31:46 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:30:58.391 02:31:46 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:30:58.391 02:31:46 -- common/autotest_common.sh@10 -- # set +x 00:30:59.763 INFO: APP EXITING 00:30:59.763 INFO: killing all VMs 00:30:59.763 INFO: killing vhost app 00:30:59.763 INFO: EXIT DONE 00:31:00.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:00.329 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:00.329 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:01.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:01.269 Cleaning 00:31:01.269 Removing: /var/run/dpdk/spdk0/config 00:31:01.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:01.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:01.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:01.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:01.269 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:01.269 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:01.269 Removing: /var/run/dpdk/spdk1/config 00:31:01.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:01.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:01.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:01.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:01.269 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:01.269 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:01.269 Removing: /var/run/dpdk/spdk2/config 00:31:01.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:01.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:01.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:01.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:01.269 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:01.269 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:01.269 Removing: /var/run/dpdk/spdk3/config 00:31:01.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:01.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:01.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:01.269 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:01.269 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:01.269 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:01.269 Removing: /var/run/dpdk/spdk4/config 00:31:01.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:01.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:01.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:01.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:01.269 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:01.269 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:01.269 Removing: /dev/shm/nvmf_trace.0 00:31:01.269 Removing: /dev/shm/spdk_tgt_trace.pid59904 00:31:01.269 Removing: /var/run/dpdk/spdk0 00:31:01.269 Removing: /var/run/dpdk/spdk1 00:31:01.269 Removing: /var/run/dpdk/spdk2 00:31:01.269 Removing: /var/run/dpdk/spdk3 00:31:01.269 Removing: /var/run/dpdk/spdk4 00:31:01.269 Removing: /var/run/dpdk/spdk_pid59759 00:31:01.269 Removing: /var/run/dpdk/spdk_pid59904 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60146 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60233 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60278 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60382 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60412 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60530 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60810 00:31:01.269 Removing: /var/run/dpdk/spdk_pid60981 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61062 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61136 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61225 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61264 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61294 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61355 00:31:01.269 Removing: /var/run/dpdk/spdk_pid61456 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62094 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62153 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62216 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62236 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62310 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62338 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62417 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62445 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62496 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62513 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62564 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62581 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62727 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62763 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62837 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62907 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62931 00:31:01.269 Removing: /var/run/dpdk/spdk_pid62990 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63024 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63059 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63088 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63128 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63157 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63197 00:31:01.269 Removing: /var/run/dpdk/spdk_pid63226 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63261 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63295 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63324 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63363 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63393 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63428 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63464 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63495 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63518 00:31:01.270 Removing: 
/var/run/dpdk/spdk_pid63549 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63575 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63603 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63627 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63691 00:31:01.270 Removing: /var/run/dpdk/spdk_pid63785 00:31:01.270 Removing: /var/run/dpdk/spdk_pid64170 00:31:01.270 Removing: /var/run/dpdk/spdk_pid67023 00:31:01.270 Removing: /var/run/dpdk/spdk_pid67317 00:31:01.270 Removing: /var/run/dpdk/spdk_pid69518 00:31:01.270 Removing: /var/run/dpdk/spdk_pid69824 00:31:01.270 Removing: /var/run/dpdk/spdk_pid70049 00:31:01.535 Removing: /var/run/dpdk/spdk_pid70076 00:31:01.535 Removing: /var/run/dpdk/spdk_pid70817 00:31:01.535 Removing: /var/run/dpdk/spdk_pid70855 00:31:01.535 Removing: /var/run/dpdk/spdk_pid71163 00:31:01.535 Removing: /var/run/dpdk/spdk_pid71590 00:31:01.535 Removing: /var/run/dpdk/spdk_pid71929 00:31:01.535 Removing: /var/run/dpdk/spdk_pid72756 00:31:01.535 Removing: /var/run/dpdk/spdk_pid73519 00:31:01.535 Removing: /var/run/dpdk/spdk_pid73574 00:31:01.535 Removing: /var/run/dpdk/spdk_pid73606 00:31:01.535 Removing: /var/run/dpdk/spdk_pid74864 00:31:01.535 Removing: /var/run/dpdk/spdk_pid75073 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79171 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79561 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79609 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79689 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79723 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79749 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79774 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79904 00:31:01.535 Removing: /var/run/dpdk/spdk_pid79977 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80181 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80267 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80437 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80534 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80639 00:31:01.535 Removing: /var/run/dpdk/spdk_pid80943 00:31:01.535 Removing: /var/run/dpdk/spdk_pid81276 00:31:01.535 Removing: /var/run/dpdk/spdk_pid81534 00:31:01.535 Removing: /var/run/dpdk/spdk_pid81992 00:31:01.535 Removing: /var/run/dpdk/spdk_pid81994 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82298 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82306 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82314 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82327 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82337 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82609 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82645 00:31:01.535 Removing: /var/run/dpdk/spdk_pid82933 00:31:01.535 Removing: /var/run/dpdk/spdk_pid83075 00:31:01.535 Removing: /var/run/dpdk/spdk_pid83461 00:31:01.535 Removing: /var/run/dpdk/spdk_pid83955 00:31:01.535 Removing: /var/run/dpdk/spdk_pid85092 00:31:01.535 Removing: /var/run/dpdk/spdk_pid85584 00:31:01.535 Removing: /var/run/dpdk/spdk_pid85586 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87284 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87338 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87397 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87458 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87585 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87638 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87699 00:31:01.535 Removing: /var/run/dpdk/spdk_pid87752 00:31:01.535 Removing: /var/run/dpdk/spdk_pid88053 00:31:01.535 Removing: /var/run/dpdk/spdk_pid88605 00:31:01.535 Removing: /var/run/dpdk/spdk_pid89580 00:31:01.535 Removing: /var/run/dpdk/spdk_pid89704 00:31:01.535 Removing: /var/run/dpdk/spdk_pid89849 
00:31:01.535 Removing: /var/run/dpdk/spdk_pid90072 00:31:01.535 Removing: /var/run/dpdk/spdk_pid90489 00:31:01.535 Removing: /var/run/dpdk/spdk_pid90498 00:31:01.535 Removing: /var/run/dpdk/spdk_pid90838 00:31:01.535 Removing: /var/run/dpdk/spdk_pid90928 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91014 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91075 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91161 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91229 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91815 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91831 00:31:01.535 Removing: /var/run/dpdk/spdk_pid91844 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92062 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92079 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92091 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92479 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92508 00:31:01.535 Removing: /var/run/dpdk/spdk_pid92913 00:31:01.535 Clean 00:31:01.802 02:31:49 -- common/autotest_common.sh@1447 -- # return 0 00:31:01.802 02:31:49 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:31:01.802 02:31:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.802 02:31:49 -- common/autotest_common.sh@10 -- # set +x 00:31:01.802 02:31:49 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:31:01.802 02:31:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.802 02:31:49 -- common/autotest_common.sh@10 -- # set +x 00:31:01.802 02:31:49 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:01.802 02:31:49 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:01.802 02:31:49 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:01.802 02:31:49 -- spdk/autotest.sh@387 -- # hash lcov 00:31:01.802 02:31:49 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:01.802 02:31:49 -- spdk/autotest.sh@389 -- # hostname 00:31:01.802 02:31:49 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:02.070 geninfo: WARNING: invalid characters removed from testname! 
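The autotest.sh@389 step above captures per-test counters into cov_test.info; the entries that follow fold that capture into the baseline and then strip third-party and system paths out of the merged report, one pattern per pass. A condensed sketch of that lcov flow, with the long --rc options from the log trimmed for readability (paths abbreviated, not the exact invocations):

  # Sketch of the coverage post-processing driven by autotest.sh (options abbreviated).
  cd /home/vagrant/spdk_repo/spdk
  out=../output
  lcov -q -c -d . --no-external -t "$(hostname)" -o "$out/cov_test.info"      # capture test counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"    # drop non-SPDK sources
  done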
00:31:28.658 02:32:16 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:31.956 02:32:19 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:35.237 02:32:22 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:37.763 02:32:25 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:40.288 02:32:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.816 02:32:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.102 02:32:33 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:46.102 02:32:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:46.102 02:32:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:46.102 02:32:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.102 02:32:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.102 02:32:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.102 02:32:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.102 02:32:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.102 02:32:33 -- paths/export.sh@5 -- $ export PATH 00:31:46.102 02:32:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.102 02:32:33 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:46.102 02:32:33 -- common/autobuild_common.sh@437 -- $ date +%s 00:31:46.102 02:32:33 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715740353.XXXXXX 00:31:46.102 02:32:33 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715740353.N9v8Tl 00:31:46.102 02:32:33 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:31:46.102 02:32:33 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:31:46.102 02:32:33 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:46.102 02:32:33 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:46.102 02:32:33 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:46.102 02:32:33 -- common/autobuild_common.sh@453 -- $ get_config_params 00:31:46.102 02:32:33 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:31:46.102 02:32:33 -- common/autotest_common.sh@10 -- $ set +x 00:31:46.102 02:32:33 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:31:46.102 02:32:33 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:31:46.102 02:32:33 -- pm/common@17 -- $ local monitor 00:31:46.102 02:32:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:46.102 02:32:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:46.102 02:32:33 -- pm/common@25 -- $ sleep 1 00:31:46.102 02:32:33 -- pm/common@21 -- $ date +%s 00:31:46.102 02:32:33 -- pm/common@21 -- $ date +%s 00:31:46.102 02:32:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715740353 00:31:46.102 02:32:33 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715740353 00:31:46.102 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715740353_collect-vmstat.pm.log 00:31:46.102 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715740353_collect-cpu-load.pm.log 00:31:46.667 02:32:34 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:31:46.667 02:32:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:46.667 02:32:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:46.667 02:32:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:46.668 02:32:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:46.668 02:32:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:46.668 02:32:34 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:46.668 02:32:34 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:46.668 02:32:34 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:46.668 02:32:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:46.668 02:32:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:46.668 02:32:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:46.668 02:32:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:46.668 02:32:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:46.668 02:32:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:46.668 02:32:34 -- pm/common@44 -- $ pid=94529 00:31:46.668 02:32:34 -- pm/common@50 -- $ kill -TERM 94529 00:31:46.668 02:32:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:46.668 02:32:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:46.668 02:32:34 -- pm/common@44 -- $ pid=94531 00:31:46.668 02:32:34 -- pm/common@50 -- $ kill -TERM 94531 00:31:46.668 + [[ -n 5146 ]] 00:31:46.668 + sudo kill 5146 00:31:46.934 [Pipeline] } 00:31:46.952 [Pipeline] // timeout 00:31:46.958 [Pipeline] } 00:31:46.978 [Pipeline] // stage 00:31:46.983 [Pipeline] } 00:31:47.001 [Pipeline] // catchError 00:31:47.010 [Pipeline] stage 00:31:47.012 [Pipeline] { (Stop VM) 00:31:47.028 [Pipeline] sh 00:31:47.304 + vagrant halt 00:31:51.491 ==> default: Halting domain... 00:31:56.790 [Pipeline] sh 00:31:57.066 + vagrant destroy -f 00:32:01.249 ==> default: Removing domain... 
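Before the VM is torn down above, autopackage stops the two resource monitors it started by checking for their pid files and sending SIGTERM. A minimal sketch of that shutdown pattern, assuming the pid is read back from the pid file each collector wrote (the log only shows the already-resolved pids 94529 and 94531):

  # Sketch: stop the resource monitors via their pid files (pm/common-style shutdown).
  power_dir=/home/vagrant/spdk_repo/spdk/../output/power
  for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
      [[ -e $pidfile ]] || continue      # monitor was never started
      kill -TERM "$(<"$pidfile")"        # ask the collector to flush its log and exit
  done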
00:32:01.261 [Pipeline] sh 00:32:01.540 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:32:01.548 [Pipeline] } 00:32:01.564 [Pipeline] // stage 00:32:01.570 [Pipeline] } 00:32:01.585 [Pipeline] // dir 00:32:01.590 [Pipeline] } 00:32:01.609 [Pipeline] // wrap 00:32:01.615 [Pipeline] } 00:32:01.630 [Pipeline] // catchError 00:32:01.639 [Pipeline] stage 00:32:01.641 [Pipeline] { (Epilogue) 00:32:01.656 [Pipeline] sh 00:32:01.929 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:08.504 [Pipeline] catchError 00:32:08.505 [Pipeline] { 00:32:08.519 [Pipeline] sh 00:32:08.795 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:09.052 Artifacts sizes are good 00:32:09.061 [Pipeline] } 00:32:09.078 [Pipeline] // catchError 00:32:09.088 [Pipeline] archiveArtifacts 00:32:09.094 Archiving artifacts 00:32:09.251 [Pipeline] cleanWs 00:32:09.261 [WS-CLEANUP] Deleting project workspace... 00:32:09.261 [WS-CLEANUP] Deferred wipeout is used... 00:32:09.267 [WS-CLEANUP] done 00:32:09.269 [Pipeline] } 00:32:09.285 [Pipeline] // stage 00:32:09.291 [Pipeline] } 00:32:09.306 [Pipeline] // node 00:32:09.312 [Pipeline] End of Pipeline 00:32:09.347 Finished: SUCCESS